IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E93.D , Issue 11
Special Section on Architectures, Protocols, and Applications for the Future Internet
  • Fumio TERAOKA
    2010 Volume E93.D Issue 11 Pages 2897
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    Download PDF (53K)
  • Eun-Jun YOON, Kee-Young YOO
    Type: PAPER
    2010 Volume E93.D Issue 11 Pages 2898-2906
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    In 2006, Yeh and Tsai proposed a mobile commerce security mechanism. In 2008, however, Yum et al. pointed out that the Yeh-Tsai security mechanism is not secure against malicious WAP gateways and proposed a simple countermeasure: using a cryptographic hash function instead of the addition operation. Nevertheless, this paper shows that neither Yeh-Tsai's nor Yum et al.'s security mechanism provides perfect forward secrecy, and that both are susceptible to an off-line guessing attack and the Denning-Sacco attack. In addition, we propose a new security mechanism to overcome the weaknesses of the previous security mechanisms.
    Download PDF (179K)
  • Xiao XU, Weizhe ZHANG, Hongli ZHANG, Binxing FANG
    Type: PAPER
    2010 Volume E93.D Issue 11 Pages 2907-2921
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    The basic requirements of distributed Web crawling systems are short download time, low communication overhead and balanced load, all of which depend largely on the system's Web partition strategy. In this paper, we propose a DHT-based distributed Web crawling system and several DHT-based Web partition methods. First, a new system model based on a DHT method called the Content Addressable Network (CAN) is proposed. Second, based on this model, a network-distance-based Web partition is implemented to reduce the crawler-crawlee network distance in a fully distributed manner. Third, by exploiting locality in the link space, we propose the concept of link-based Web partition to reduce the communication overhead of the system. This method not only reduces the number of inter-links to be exchanged among the crawlers but also reduces the cost of routing on the DHT overlay. To combine the benefits of these two Web partition methods, we then propose two distributed multi-objective Web partition methods. Finally, all the proposed methods are compared with existing system models in simulated experiments over different datasets and system scales. In most cases, the new methods show their superiority.
    Download PDF (4458K)
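The link-based partition idea above can be approximated in a few lines. The following sketch (an assumption for illustration, not the paper's CAN-based model) partitions URLs among crawlers by hashing the hostname, so that all pages of a host, and hence most intra-host links, stay on one node; the crawler names and the choice of SHA-1 are hypothetical:

```python
import hashlib

def assign_host(host, crawlers):
    """Map a hostname to a crawler node by hashing it (illustration only).

    Hashing the host (rather than the full URL) keeps all pages of a
    site on one crawler, so intra-host links need not be exchanged.
    """
    digest = hashlib.sha1(host.encode()).digest()
    return crawlers[int.from_bytes(digest[:4], "big") % len(crawlers)]

crawlers = ["crawler-0", "crawler-1", "crawler-2"]  # hypothetical nodes
owner = assign_host("example.org", crawlers)
```

A real system would also need the network-distance objective the paper combines with this locality objective; a plain hash ignores crawler-crawlee distance entirely.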
  • Shigeya SUZUKI, Rodney VAN METER, Osamu NAKAMURA, Jun MURAI
    Type: PAPER
    2010 Volume E93.D Issue 11 Pages 2922-2931
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    We present a novel RFID middleware architecture, Otedama, which exploits a unique property of RFID information to improve performance. RFID tags are bound to items: new information related to an RFID tag is generated at the site where the tag is, and the entity most interested in the item and its history is in close proximity to the tag. To exploit this property, we propose a scheme that bundles the information related to a specific ID into one object and moves that bundle to a nearby server as the RFID tag moves from place to place. With this scheme, information is always accessible by querying a system near the physical location of the tag, which yields better query performance. Additionally, the volume of records that a repository manager must keep is reduced, because the relocation naturally migrates data away as physical objects move. We show the effectiveness of this architecture by analyzing data from a major retailer, finding that information retrieval performance improves roughly six-fold and that the cost of search can be several times lower.
    Download PDF (569K)
  • Mohammad BEHDADFAR, Hossein SAIDI, Masoud-Reza HASHEMI, Ali GHIASIAN, ...
    Type: PAPER
    2010 Volume E93.D Issue 11 Pages 2932-2943
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    Recently, we proposed a new prefix lookup algorithm that treats prefixes as scalar numbers. The algorithm can be applied to different tree structures such as the binary search tree and balanced trees like the RB-tree, AVL-tree and B-tree, with minor modifications to the search, insert and/or delete procedures, to make them capable of finding the prefixes of an incoming string, e.g. an IP address. As a result, the search complexity is O(log n), where n is the number of prefixes stored in the tree. More importantly, the search complexity does not depend on the address length w, i.e. 32 for IPv4 and 128 for IPv6. Here it is assumed that the interface to memory is wide enough to access a prefix and that simple operations such as comparison can be done in O(1) even for word length w. Moreover, the insertion and deletion procedures of this algorithm are much simpler and faster than those of its competitors. In this paper, we report software implementation results for this algorithm and compare it with other solutions for both IPv4 and IPv6. We also report on a simple hardware implementation of the algorithm for IPv4. The comparison shows better lookup and update performance or lower storage requirements for Scalar Prefix Search in both the average and worst cases.
    Download PDF (534K)
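For contrast with the scalar tree search described above, a naive longest-prefix match over integer-encoded prefixes can be sketched as follows. This is illustration only: it probes every prefix length (O(w), rather than the paper's O(log n) scalar search), and the table entries are hypothetical:

```python
import ipaddress

def longest_prefix_match(table, addr):
    """Naive longest-prefix match; table keys are (network_int, prefix_len).

    Probes every length from /32 down, so it is O(w) in the address
    length, unlike the O(log n) scalar tree search of the paper.
    """
    a = int(ipaddress.ip_address(addr))
    for plen in range(32, -1, -1):  # prefer the longest prefix
        net = (a >> (32 - plen)) << (32 - plen) if plen else 0
        if (net, plen) in table:
            return table[(net, plen)]
    return None

# Hypothetical routing table entries.
table = {
    (int(ipaddress.ip_address("10.0.0.0")), 8): "hop-A",
    (int(ipaddress.ip_address("10.1.0.0")), 16): "hop-B",
}
```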
  • Rentao GU, Hongxiang WANG, Yongmei SUN, Yuefeng JI
    Type: PAPER
    2010 Volume E93.D Issue 11 Pages 2944-2952
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    A novel approach to fast traffic classification for high-speed networks is proposed, based on statistical features of protocol behavior. The packet size and a new parameter named “Estimated Protocol Processing Time” are collected from real data flows. A set of joint probability distributions is then obtained to describe the protocol behaviors and classify the traffic. By comparing the parameters of an unknown flow with the pre-computed joint distributions, we can judge which application protocol the flow belongs to. Unlike methods based on the traditional inter-arrival time, we use the “Estimated Protocol Processing Time” to reduce location and time dependence, obtaining better results than traditional traffic classification methods. Since the approach requires no string matching and lends itself to parallel, pipelined data processing, it can easily be deployed in hardware for real-time classification in high-speed networks.
    Download PDF (1381K)
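The joint-distribution classification step might be sketched as follows. This is a minimal illustration under simplifying assumptions, not the authors' implementation: flows are reduced to pre-binned (packet size, processing time) pairs, and an unknown sample is assigned to the protocol whose empirical joint distribution gives it the highest probability:

```python
from collections import defaultdict

def train(flows):
    """Build per-protocol joint distributions over pre-binned
    (packet size, estimated processing time) pairs."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for proto, size_bin, time_bin in flows:
        counts[proto][(size_bin, time_bin)] += 1
        totals[proto] += 1
    return {p: {k: v / totals[p] for k, v in h.items()}
            for p, h in counts.items()}

def classify(model, size_bin, time_bin):
    """Choose the protocol whose distribution best explains the sample."""
    return max(model, key=lambda p: model[p].get((size_bin, time_bin), 0.0))

# Hypothetical training samples: (protocol, size bin, time bin).
model = train([("http", 2, 1), ("http", 2, 1), ("dns", 0, 0), ("dns", 0, 1)])
```

The table lookups here map naturally onto the pipelined hardware realization the abstract mentions, since no string matching is involved.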
  • Heru SUKOCO, Yoshiaki HORI, Hendrawan, Kouichi SAKURAI
    Type: PAPER
    2010 Volume E93.D Issue 11 Pages 2953-2961
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    Streaming multicast and real-time audio/video applications have spread rapidly across the Internet. These applications rarely use congestion control and do not share the available network capacity fairly with TCP-based applications such as HTTP, FTP and email. The growth of non-TCP-based applications therefore threatens the Internet with significantly increased congestion and starvation. This paper proposes a set of mechanisms, covering various data rates, background traffic, and various scenarios, for sending multicast traffic in a TCP-friendly way. Using eight simulation scenarios, we evaluate six-layer multicast transmissions with Pareto background traffic (shape factor 1.5) on performance metrics such as throughput, delay/latency, jitter, TCP friendliness, packet loss ratio, and convergence time. Our study shows that non-TCP traffic behaves fairly toward coexisting TCP-based applications on shared links, even in the presence of background traffic. The simulations also show low throughput, jitter varying between 0 and 10 ms, and a packet loss ratio above 3%. Convergence was also difficult to reach quickly when only non-TCP traffic was involved.
    Download PDF (2006K)
Regular Section
  • Ichiro MITSUHASHI, Michio OYAMAGUCHI, Kunihiro MATSUURA
    Type: PAPER
    Subject area: Fundamentals of Information Systems
    2010 Volume E93.D Issue 11 Pages 2962-2978
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    The unification problem for term rewriting systems (TRSs) is the problem of deciding, for a TRS R and two terms s and t, whether s and t are unifiable modulo R. We have shown that the problem is decidable for confluent simple TRSs. Here, a simple TRS means one where the right-hand side of every rewrite rule is a ground term or a variable. In this paper, we extend this result and show that the unification problem for confluent semi-constructor TRSs is decidable. Here, a semi-constructor TRS means one where all defined symbols appearing in the right-hand side of each rewrite rule occur only in its ground subterms.
    Download PDF (358K)
  • In-Cheol PARK, Tae-Hwan KIM
    Type: PAPER
    Subject area: Fundamentals of Information Systems
    2010 Volume E93.D Issue 11 Pages 2979-2988
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    Square-related functions such as the square, inverse square, square-root and inverse square-root operations are widely used in digital signal processing and digital communication algorithms, and efficient realizations are commonly required to reduce hardware complexity. From an implementation point of view, approximate realizations are often desirable if they do not degrade performance significantly. In this paper, we propose new linear approximations for the square-related functions. Traditional linear approximations need multipliers to calculate slope offsets and tables to store initial offset and slope values, whereas the proposed approximations exploit the inherent properties of square-related functions to interpolate linearly with only simple operations such as shift, concatenation and addition, which are commonly supported in modern VLSI systems. More importantly, regardless of the bit-width of the number system, the maximum relative errors of the proposed approximations are bounded to 6.25% and 3.13% for the square and square-root functions, respectively. For the inverse square and inverse square-root functions, the maximum relative errors are bounded to 12.5% and 6.25%, respectively, when the input operands are represented in 20 bits.
    Download PDF (784K)
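To illustrate the flavor of such shift-and-add approximations (not the paper's exact construction, which achieves a tighter 6.25% bound for the square function), the chord across each power-of-two segment of x² can be evaluated with two shifts and an add/subtract, and its relative error stays bounded regardless of bit-width:

```python
def approx_square(x):
    """Approximate x*x with shifts and adds only.

    For x in [2^k, 2^(k+1)) the chord through the segment endpoints is
    3*(2^k)*x - 2^(2k+1), i.e. (x << k) + (x << (k+1)) - (1 << (2k+1)).
    This simple chord overestimates by at most 12.5% for any bit-width;
    the paper's construction is tighter (6.25%).
    """
    k = x.bit_length() - 1
    return (x << k) + (x << (k + 1)) - (1 << (2 * k + 1))

# Exhaustively check the relative-error bound for 12-bit inputs.
worst = max(abs(approx_square(x) - x * x) / (x * x) for x in range(1, 1 << 12))
```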
  • Sung Kwon KIM
    Type: PAPER
    Subject area: Fundamentals of Information Systems
    2010 Volume E93.D Issue 11 Pages 2989-2994
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    Let T be a tree with n nodes, in which each edge is associated with a length and a weight. The density-constrained longest (heaviest) path problem is to find a path of T with maximum path length (weight) whose path density is bounded by an upper bound and a lower bound. The path density is the path weight divided by the path length. We show that both problems can be solved in optimal O(n log n) time.
    Download PDF (204K)
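A brute-force version of the problem statement is easy to write down and is useful as a reference implementation; the paper's O(n log n) algorithm is far more involved. The edge list below is a made-up example:

```python
from itertools import combinations

def longest_density_path(n, edges, lo, hi):
    """Brute-force reference for the density-constrained longest path.

    edges: (u, v, length, weight) tuples over nodes 0..n-1 forming a tree.
    Returns the maximum total length of a path whose density
    (weight / length) lies in [lo, hi]. This check is quadratic; the
    paper solves the problem in O(n log n).
    """
    adj = {u: [] for u in range(n)}
    for u, v, l, w in edges:
        adj[u].append((v, l, w))
        adj[v].append((u, l, w))

    def path_sums(src, dst):  # unique tree path via DFS
        stack, seen = [(src, 0.0, 0.0)], {src}
        while stack:
            node, l, w = stack.pop()
            if node == dst:
                return l, w
            for nxt, el, ew in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append((nxt, l + el, w + ew))

    best = None
    for u, v in combinations(range(n), 2):
        l, w = path_sums(u, v)
        if lo <= w / l <= hi:
            best = l if best is None else max(best, l)
    return best

# Made-up tree: (u, v, length, weight).
edges = [(0, 1, 1.0, 1.0), (1, 2, 1.0, 3.0), (1, 3, 2.0, 2.0)]
```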
  • Chuzo IWAMOTO, Kento SASAKI, Kenji NISHIO, Kenichi MORITA
    Type: PAPER
    Subject area: Fundamentals of Information Systems
    2010 Volume E93.D Issue 11 Pages 2995-3004
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    A disentanglement puzzle consists of mechanically interlinked pieces, and the puzzle is solved by disentangling one piece from another set of pieces. A cast puzzle is a type of disentanglement puzzle, where each piece is made of a zinc die-cast alloy. In this paper, we consider the generalized cast puzzle problem, whose input is the layout of a finite number of pieces (polyhedrons) in 3-dimensional Euclidean space. For every integer k ≥ 0, we present a polynomial-time transformation from an arbitrary k-exponential-space Turing machine M and its input x to a cast puzzle c1 of size k-exponential in |x| such that M accepts x if and only if c1 is solvable. Here, the layout of c1 is encoded as a string of polynomial length (even though c1 has k-exponential size). Therefore, the cast puzzle problem of size k-exponential is k-EXPSPACE-hard for every integer k ≥ 0. We also present a polynomial-time transformation from an arbitrary instance f of the SAT problem to a cast puzzle c2 such that f is satisfiable if and only if c2 is solvable.
    Download PDF (647K)
  • Malik Jahan KHAN, Mian Muhammad AWAIS, Shafay SHAMAIL
    Type: PAPER
    Subject area: Computer System
    2010 Volume E93.D Issue 11 Pages 3005-3016
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    Inspired by the natural self-managing behavior of the human body, autonomic systems promise to inject self-managing behavior into software systems. Such behavior enables self-configuration, self-healing, self-optimization and self-protection capabilities. Self-configuration is required in systems where efficiency is the key issue, such as real-time execution environments. To solve self-configuration problems in autonomic systems, various problem-solving techniques have been reported in the literature, including case-based reasoning (CBR). The case-based reasoning approach exploits past experience, which can be helpful in achieving autonomic capabilities. The learning process improves as more experience is added to the case-base in the form of cases, which results in a larger case-base; a larger case-base, in turn, reduces efficiency in terms of computational cost. To overcome this efficiency problem, this paper proposes clustering the case-base before searching for the solution of a reported problem. This approach reduces the search complexity by confining a new case to a relevant cluster in the case-base. Clustering the case-base is a one-time process and does not need to be repeated regularly. The proposed approach is outlined in the form of a new clustered CBR framework, which has been evaluated on a simulation of an Autonomic Forest Fire Application (AFFA). This paper presents an outline of the simulated AFFA and results for three different clustering algorithms used to cluster the case-base in the proposed framework. The performance of the conventional and clustered CBR approaches is compared in terms of Accuracy, Recall and Precision (ARP) and computational efficiency.
    Download PDF (1050K)
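The clustered-retrieval idea can be sketched with a toy k-means over numeric case vectors (an illustration under simplifying assumptions; the paper evaluates three clustering algorithms and a full CBR cycle):

```python
import math

def nearest(point, points):
    """Return the member of `points` closest to `point` (Euclidean)."""
    return min(points, key=lambda p: math.dist(point, p))

def cluster(cases, centroids, iters=5):
    """One-time k-means partition of the case-base (toy version)."""
    for _ in range(iters):
        groups = {c: [] for c in centroids}
        for case in cases:
            groups[nearest(case, centroids)].append(case)
        centroids = [tuple(sum(xs) / len(g) for xs in zip(*g)) if g else c
                     for c, g in groups.items()]
    groups = {c: [] for c in centroids}
    for case in cases:
        groups[nearest(case, centroids)].append(case)
    return groups

def retrieve(groups, problem):
    """Search only the cluster nearest to the new problem,
    instead of scanning the entire case-base."""
    return nearest(problem, groups[nearest(problem, list(groups))])

# Hypothetical 2-D case vectors and two seed centroids.
cases = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (10.0, 10.0), (11.0, 10.0)]
groups = cluster(cases, [(0.0, 0.0), (10.0, 10.0)])
```

The speedup comes from `retrieve` comparing the problem against one cluster rather than every stored case, exactly the search-space confinement the abstract describes.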
  • Young-Jin KIM, Jihong KIM, Jeong-Bae LEE, Kee-Wook RIM
    Type: PAPER
    Subject area: Software System
    2010 Volume E93.D Issue 11 Pages 3017-3026
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    In disk-based storage systems, non-volatile write caches have been widely used to reduce write latency as well as to ensure data consistency at the level of the storage controller. A write cache policy should consider which data is worth caching and evicting, and it should also take into account the real I/O characteristics of the non-volatile device. Existing work, however, has mainly focused on improving basic cache operations without properly considering the I/O cost of the non-volatile device. In this paper, we propose PAW, a pattern-aware write cache policy for NAND flash memory in disk-based mobile storage systems. PAW is designed for the mix of many sequential accesses and fewer non-sequential ones found in mobile storage systems, redirecting the latter to the NAND flash memory and the former to the disk. In addition, PAW combines the pattern-aware write cache policy with an I/O clustering-based queuing method to strengthen sequentiality, with the aim of reducing overall system I/O latency. For evaluation, we have built a practical hard disk simulator with a non-volatile NAND flash memory cache. Experimental results show that our policy significantly improves overall I/O performance by considerably reducing the overhead of the non-volatile cache compared with a traditional policy, achieving high energy efficiency.
    Download PDF (383K)
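The redirection idea, sequential runs to the disk and scattered writes to the NAND flash cache, can be sketched as a simple dispatcher. The `gap` threshold and LBA-based heuristic are assumptions for illustration, not PAW's actual policy:

```python
def dispatch(requests, gap=8):
    """Split a write stream: sequential runs to disk, scattered writes
    to the NAND flash cache (sketch of the redirection idea).

    requests: starting logical block addresses (LBAs). A request at most
    `gap` blocks after its predecessor is treated as sequential.
    """
    disk, flash = [], []
    prev = None
    for lba in requests:
        if prev is not None and 0 <= lba - prev <= gap:
            disk.append(lba)   # continues a sequential run
        else:
            flash.append(lba)  # scattered write: absorb in flash
        prev = lba
    return disk, flash

disk, flash = dispatch([100, 104, 108, 5000, 112, 116])
```

Note that a single outlier (5000) also demotes the following request (112), a limitation PAW's clustering-based queuing is designed to mitigate by regrouping requests before dispatch.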
  • Masato ASAHARA, Kenji KONO, Toshinori KOJIMA, Ai HAYAKAWA
    Type: PAPER
    Subject area: Software System
    2010 Volume E93.D Issue 11 Pages 3027-3037
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    Many services rely on the Internet to provide their customers with immediate access to information. To provide a stable service to a large number of customers, a service provider needs to monitor demand fluctuations and adjust the number and locations of replica servers around the world. Unfortunately, flash crowds make it quite difficult to determine a good number and placement of replica servers, because the servers must be repositioned very quickly in response to rapidly changing demand. We are developing ExaPeer, an infrastructure for dynamically repositioning replica servers on the Internet on the basis of demand fluctuations. In this paper we introduce ExaPeer Server Reposition (EPSR), a mechanism that quickly finds an appropriate number and placement of replica servers. EPSR is designed to be lightweight and responsive to flash crowds, and enables us to position replica servers so that no server becomes overloaded. Even though no dedicated server collects global information such as the distribution of clients or the load of all servers over the Internet, the peer-to-peer approach enables EPSR to determine the number and locations of replica servers quickly enough to respond to flash crowds. Simulation results demonstrate that EPSR locates high-demand areas, estimates their scale correctly and determines an appropriate number and placement of replica servers even when the demand for a service increases or decreases rapidly.
    Download PDF (575K)
  • Ana Erika CAMARGO CRUZ, Koichiro OCHIMIZU
    Type: PAPER
    Subject area: Software Engineering
    2010 Volume E93.D Issue 11 Pages 3038-3050
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    Design-complexity metrics, though measured from code, have been shown to be good predictors of fault-prone object-oriented programs. Among the most frequently used are the Chidamber and Kemerer (CK) metrics. This paper discusses how to make early predictions of fault-prone object-oriented classes using UML approximations of three CK metrics. First, we present a simple approach to approximating the Weighted Methods per Class (WMC), Response For a Class (RFC) and Coupling Between Objects (CBO) metrics from UML collaboration diagrams. Then, we study the application of two data normalization techniques. This study has a twofold purpose: to decrease the approximation error in measuring the CK metrics from UML diagrams, and to obtain a more similar distribution of these metrics across software projects, so that better prediction results are obtained when the same prediction model is used across different projects. Finally, we construct three prediction models with the source code of one package of an open source project (Mylyn from Eclipse), and we test them on several other packages and three different small software projects, using their UML and code metrics for comparison. The results of our empirical study lead us to conclude that the proposed UML RFC and UML CBO metrics can predict the fault-proneness of code almost as accurately as their respective code metrics do. The elimination of outliers and the normalization procedure were of great utility, not only in enabling our UML metrics to predict fault-proneness with a code-based prediction model but also in improving the prediction results of our models across different software packages and projects.
    Download PDF (684K)
  • Tetsuo IMAI, Atsushi TANAKA
    Type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2010 Volume E93.D Issue 11 Pages 3051-3058
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    Recent studies of the Internet topology have reported that the inter-Autonomous System (AS) topology exhibits a power-law degree distribution, known as the scale-free property. Although there are many models that generate scale-free topologies, no game-theoretic approach had yet been proposed. In this paper, we propose a new dynamic game-theoretic model of AS-level Internet topology formation. Through numerical simulations, we show that our process tends to produce topologies with the scale-free property, especially for large decay parameters and large random link costs. The significance of our study can be summarized in three points. First, we show that scale-free topologies can also emerge from a game-theoretic model. Second, we propose a new dynamic process of the network formation game for modeling AS topology formation, and show that our model is appropriate in both the micro and the macro sense. In the micro sense, our topology formation process is appropriate because it represents the competitive and distributed situation observed in real AS-level Internet topology formation. In the macro sense, some statistical properties of the emergent topologies are similar to those observed in the real AS-level Internet topology. Finally, we present numerical simulations of our process, which is a deterministic variation of the dynamic process of the network formation game with transfers; this is also a new result in the field of game theory.
    Download PDF (472K)
  • Kuo-Chen HUNG, Yuan-Cheng TSAI, Kuo-Ping LIN, Peterson JULIAN
    Type: PAPER
    Subject area: Office Information Systems, e-Business Modeling
    2010 Volume E93.D Issue 11 Pages 3059-3065
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    Several papers have presented measure functions for handling multi-criteria fuzzy decision-making problems based on interval-valued intuitionistic fuzzy sets. In some cases, however, the proposed functions cannot give sufficient information about the alternatives. In this paper, we overcome this insufficiency and provide a novel accuracy function to measure the degree of interval-valued intuitionistic fuzzy information. A practical example is provided to demonstrate the proposed approach. In addition, to make computing and ranking results easier and to increase recruiting productivity, a computer-based interface system has been developed to help decision makers make decisions more efficiently.
    Download PDF (471K)
  • Mitsuharu MATSUMOTO
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2010 Volume E93.D Issue 11 Pages 3066-3075
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    This paper describes a nonlinear filter, called the self-quotient ε-filter (SQEF), that can extract image features from noise-corrupted images. SQEF improves on the self-quotient filter (SQF) for feature extraction from noisy images. Although SQF is a simple approach to extracting features from images, it has difficulty when the image contains noise. SQEF, in contrast, can extract image features not only from clean images but also from images corrupted by uniform, Gaussian or impulse noise. We present the SQEF algorithm and describe its behavior when applied to images corrupted by each of these noise types. Experimental results confirm the effectiveness of the proposed method.
    Download PDF (4074K)
  • Gang Yeon KIM, Kwan H. LEE
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2010 Volume E93.D Issue 11 Pages 3076-3087
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    We present a new method that accurately represents the reflectance of metallic paints using a two-layer reflectance model with sampled microfacet distribution functions. We model the structure of metallic paints as two simplified layers: a binder surface that follows a microfacet distribution and a sub-layer that also follows a facet distribution. In the sub-layer, the diffuse and specular reflectance represent color pigments and metallic flakes, respectively. We use an iterative method based on Gauss-Seidel relaxation that stably fits the measured data to our highly non-linear model. We optimize the model by handling the microfacet distribution terms in a piecewise-linear non-parametric form in order to increase its degrees of freedom. The proposed model is validated by applying it to various metallic paints. The results show that our model fits better than the models used in other studies. Our model provides better accuracy thanks to the non-parametric terms it employs, and its embedded analytical form also makes it efficient for analyzing the characteristics of metallic paints. The non-parametric microfacet distribution terms require densely measured data, but not over the entire BRDF (bidirectional reflectance distribution function) domain, so our method reduces the burden of data acquisition during measurement. In particular, it is efficient with a curved-sample-based measurement system, which allows dense data in the microfacet domain to be obtained in a single measurement.
    Download PDF (1460K)
  • Sven FORSTMANN, Jun OHYA
    Type: PAPER
    Subject area: Computer Graphics
    2010 Volume E93.D Issue 11 Pages 3088-3099
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    This paper proposes a GPU-based method that can visualize voxelized surface data with fine and complicated features, achieves high rendering quality at interactive frame rates, and has low memory consumption. The surface data is compressed using run-length encoding (RLE) for each level of detail (LOD). The rendering loop is then performed on the GPU for the viewpoint position at each time instant. The scene is raycasted in planes, where each plane is perpendicular to the horizontal plane in the world coordinate system and passes through the viewpoint. For each plane, one ray is cast to rasterize all RLE elements intersecting this plane, starting from the viewpoint and ranging up to the maximum view distance. This rasterization process projects each RLE element passing the occlusion test onto the screen at a LOD that decreases with the element's distance from the viewpoint. Finally, the smoothing of voxels in screen space and full-screen anti-aliasing are performed. To provide lighting calculations without storing normal vectors inside the RLE data structure, our algorithm recovers the normal vectors from the rendered scene's depth buffer. When the viewpoint changes, the same process is re-executed for the new viewpoint. Experiments on different scenes show that the proposed algorithm is faster than the equivalent CPU implementation and other related methods. Our experiments further show that the method is memory-efficient and achieves high-quality results.
    Download PDF (1206K)
  • Ching-Chi CHEN, Wei-Yen HSU, Shih-Hsuan CHIU, Yung-Nien SUN
    Type: PAPER
    Subject area: Biological Engineering
    2010 Volume E93.D Issue 11 Pages 3100-3107
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    Image registration is an important topic in medical image analysis. It is commonly used in 2D mosaicing, to construct the whole image of a biological specimen, and in 3D reconstruction, to build up the structure of an examined specimen from a series of microscopic images. Nevertheless, owing to a variety of factors, including microscope optics, mechanisms, sensors, and manipulation, there may be great differences between acquired image slices even if they are adjacent. Common differences include chromatic aberration and geometric discrepancies caused by cuts, tears, folds, and deformation, which usually make registration difficult to achieve. In this paper, we propose an efficient registration method, consisting of a feature-based registration approach based on analytic robust point matching (ARPM) and a refinement procedure using the feature-based Levenberg-Marquardt algorithm (FLM), to automatically reconstruct the 3D vessels of rat brains from a series of microscopic images. The registration algorithm rapidly evaluates the spatial correspondence and geometric transformation between two point sets of different sizes. In addition, to achieve subpixel accuracy, the FLM method is used to refine the registered results. Owing to its nonlinear characteristic, the FLM method converges much faster than most other methods. We evaluate the performance of the proposed method by comparing it with the well-known thin-plate spline robust point matching (TPS-RPM) algorithm. The results indicate that the ARPM algorithm combined with the FLM method is not only robust but also efficient in image registration.
    Download PDF (701K)
  • Kenji TSUCHIE, Yoshiko HANADA, Seiji MIYOSHI
    Type: LETTER
    Subject area: Fundamentals of Information Systems
    2010 Volume E93.D Issue 11 Pages 3108-3111
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    We propose an “estimation of distribution algorithm” incorporating switching. The algorithm enables switching from the standard estimation of distribution algorithm (EDA) to the genetic algorithm (GA), or vice versa, on the basis of switching criteria. The algorithm shows better performance than GA and EDA in deceptive problems.
    Download PDF (112K)
  • Heekwon PARK, Seungjae BAEK, Jongmoo CHOI
    Type: LETTER
    Subject area: Computer System
    2010 Volume E93.D Issue 11 Pages 3112-3115
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    Traditional mobile consumer electronics such as media players and smart phones use two distinct memories, SDRAM and Flash memory. SDRAM is used as main memory because it supports byte-level random access, while Flash memory serves as secondary storage because of its non-volatility. However, the advent of Storage Class Memory (SCM), which combines the characteristics of SDRAM and Flash memory, creates an opportunity to design new system configurations. In this paper, we explore four feasible system configurations, namely RAM-Flash, RAM-SCM, SCM-Flash and SCM-Only. Then, using a real embedded system equipped with FeRAM, a type of SCM, we analyze the tradeoffs between performance and energy efficiency for each configuration. Experimental results show that SCM has great potential to reduce energy consumption in all configurations, while performance is highly application-dependent and may be degraded in the SCM-Flash and SCM-Only configurations.
    Download PDF (223K)
  • Yong CAO, Qingxin ZHU
    Type: LETTER
    Subject area: Software Engineering
    2010 Volume E93.D Issue 11 Pages 3116-3119
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    Software reliability is the ability of software to perform its required functions under stated conditions for a stated period of time. In this paper, a hybrid methodology combining ARIMA and fractal models is proposed to exploit the respective strengths of ARIMA in linear modeling and of fractal models in nonlinear modeling. Experiments on software reliability data taken from the literature show that our method compares favorably with other methods, and a new perspective for research on the software failure mechanism is presented.
    Download PDF (152K)
  • Woong-Kee LOH, Yang-Sae MOON, Jun-Gyu KANG
    Type: LETTER
    Subject area: Data Engineering, Web Information Systems
    2010 Volume E93.D Issue 11 Pages 3120-3123
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    In this paper, we emphasize the need for data cleansing when clustering large-scale transaction databases and propose a new data cleansing method that improves clustering quality and performance. We evaluate our data cleansing method through a series of experiments. As a result, the clustering quality and performance were significantly improved by up to 165% and 330%, respectively.
    Download PDF (252K)
  • Jihwan SONG, Xing XIE, Yoon-Joon LEE, Ji-Rong WEN
    Type: LETTER
    Subject area: Data Engineering, Web Information Systems
    2010 Volume E93.D Issue 11 Pages 3124-3127
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    Mobile devices such as cell phones and personal digital assistants (PDAs) are becoming increasingly popular tools for accessing the Internet. Unfortunately, the experience of users accessing web pages on these devices has been less than satisfactory because of their small display areas, slow communication links and low computing power. In this paper, we propose a trained scorer that estimates the mobile-friendliness scores of web pages, providing an indication of their suitability for mobile devices. These scores help mobile-friendly pages receive higher ranks in search results when mobile users seek information on the web. Our experiments show that re-ranking search results by our mobile-friendliness scores increases mobile user satisfaction.
    Download PDF (367K)
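The general shape of such a scorer — extract per-page features, combine them with trained weights, re-rank by score — can be sketched as below. The feature set and logistic weights here are purely hypothetical placeholders; the paper's actual features and training procedure are not reproduced.

```python
import math

def features(page):
    # Hypothetical page features, each normalized to [0, 1].
    return [
        min(page["bytes"] / 100_000, 1.0),    # page weight
        min(page["images"] / 20, 1.0),        # image count
        min(page["width_px"] / 1000, 1.0),    # layout width
        1.0 if page["has_viewport_meta"] else 0.0,
    ]

def friendliness(page, w=(-1.5, -1.0, -2.0, 2.0), bias=1.0):
    """Logistic score in (0, 1); higher = more mobile-friendly (illustrative weights)."""
    z = bias + sum(wi * fi for wi, fi in zip(w, features(page)))
    return 1.0 / (1.0 + math.exp(-z))

def rerank(results):
    # Toy re-ranking: order result pages by their friendliness score.
    return sorted(results, key=friendliness, reverse=True)
```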
  • Suk Tae SEO, In Keun LEE, Seo Ho SON, Hyong Gun LEE, Soon Hak KWON
    Type: LETTER
    Subject area: Image Processing and Video Processing
    2010 Volume E93.D Issue 11 Pages 3128-3131
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    We propose a simple but effective image segmentation method based not on thresholding but on a merging strategy that evaluates the joint probability of gray levels in the co-occurrence matrix. The effectiveness of the proposed method is shown through a segmentation experiment.
    Download PDF (819K)
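A minimal sketch of the merging strategy: build a gray-level co-occurrence matrix from neighbouring pixel pairs, then merge any two gray levels whose joint probability is high, so that levels that frequently sit next to each other end up in one segment. The 4-neighbour pairing and the threshold are assumptions for illustration, not the paper's exact procedure.

```python
def segment_by_cooccurrence(img, thresh=0.02):
    """Merge gray levels with high joint probability in the co-occurrence matrix."""
    h, w = len(img), len(img[0])
    levels = sorted({v for row in img for v in row})
    # Co-occurrence counts over horizontal and vertical neighbour pairs.
    count, total = {}, 0
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):
                if y + dy < h and x + dx < w:
                    pair = tuple(sorted((img[y][x], img[y + dy][x + dx])))
                    count[pair] = count.get(pair, 0) + 1
                    total += 1
    # Union-find over gray levels; merge levels that co-occur often.
    parent = {v: v for v in levels}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for (a, b), c in count.items():
        if a != b and c / total >= thresh:
            parent[find(a)] = find(b)
    # Label each pixel by the root of its merged gray-level group.
    return [[find(v) for v in row] for row in img]
```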
  • Ju Hyun PARK, Young-Chul KIM, Hong-Sung HOON
    Type: LETTER
    Subject area: Image Processing and Video Processing
    2010 Volume E93.D Issue 11 Pages 3132-3135
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    In this paper, we propose a new motion vector smoothing algorithm for frame interpolation that uses weighted vector median filtering based on edge direction. The proposed WVM (weighted vector median) system adjusts the weighting values according to edge direction, exploiting the spatial coherence between the edge-direction continuity of a moving object and motion vector (MV) reliability. This edge-based weighting scheme removes the effect of outliers and irregular MVs from the MV smoothing process. Simulation results show that the proposed algorithm corrects erroneous motion vectors and thus improves both subjective and objective visual quality compared with conventional methods.
    Download PDF (634K)
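The core weighted-vector-median operation is standard: the filter output is the vector in the window that minimizes the weighted sum of distances to all other vectors. The sketch below takes the weights as given; deriving them from edge direction is the paper's contribution and is not reproduced here.

```python
def weighted_vector_median(vectors, weights):
    """Return the vector v_k in `vectors` minimizing sum_j w_j * ||v_k - v_j||."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    best, best_cost = None, float("inf")
    for cand in vectors:
        cost = sum(w * dist(cand, v) for w, v in zip(weights, vectors))
        if cost < best_cost:
            best, best_cost = cand, cost
    return best
```

With uniform weights an outlier motion vector can never win, since its summed distance to the cluster is large; raising the weight of a reliable neighbour pulls the output toward it.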
  • Yan LI, Siwei LUO, Qi ZOU
    Type: LETTER
    Subject area: Image Recognition, Computer Vision
    2010 Volume E93.D Issue 11 Pages 3136-3139
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    This paper combines the LBP operator with the active contour model. It introduces a salient gradient vector flow snake (SGVF snake) based on a novel edge map generated from the salient boundary point image (SBP image). The MDGVM criterion process helps to reduce fine detail and background noise while retaining the salient boundary points. The resulting SBP image, used as an edge map, gives powerful support to the SGVF snake because it inherently combines intensity, gradient and texture cues. Experiments show that the MDGVM process is highly efficient in reducing outliers and that the SGVF snake is a large improvement over the GVF snake for contour detection, especially in natural images with low contrast and small-texture backgrounds.
    Download PDF (276K)
  • JunSeong KIM, Jongsu YI
    Type: LETTER
    Subject area: Image Recognition, Computer Vision
    2010 Volume E93.D Issue 11 Pages 3140-3143
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    Vision sensors provide rich sources of information, but sensing images and processing them in real time is a challenging task. This paper introduces a vision system based on the SoCBase platform and presents heuristic designs of the SAD (sum of absolute differences) correlation algorithm as a component of the vision system. Simulation results show that the vision system is suitable for real-time applications and that the heuristic SAD designs are worthwhile, since they save a considerable amount of space with little sacrifice in quality.
    Download PDF (209K)
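SAD correlation itself is simple to sketch: slide a candidate block over the search range and keep the offset with the smallest sum of absolute differences. The synthetic pattern and parameters below are illustrative, not the paper's hardware design.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b)
               for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def best_disparity(left, right, x, y, size, max_d):
    """Horizontal offset in `right` that best matches the block at (x, y) in `left`."""
    block = [row[x : x + size] for row in left[y : y + size]]
    best_d, best_cost = 0, float("inf")
    for d in range(max_d + 1):
        if x - d < 0:
            break
        cand = [row[x - d : x - d + size] for row in right[y : y + size]]
        cost = sad(block, cand)
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```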
  • Lv GUO, Yin LI, Jie YANG, Li LU
    Type: LETTER
    Subject area: Image Recognition, Computer Vision
    2010 Volume E93.D Issue 11 Pages 3144-3148
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    A novel method for single-image super resolution without any training samples is presented in this paper. Using sparse representation, the method attempts to recover at each pixel the best possible resolution increase based on the self-similarity of image patches across different scales and rotations. The experiments indicate that the proposed method produces robust and competitive results.
    Download PDF (7782K)
  • Yuefei ZHANG, Mei XIE, Ling MAO
    Type: LETTER
    Subject area: Image Recognition, Computer Vision
    2010 Volume E93.D Issue 11 Pages 3149-3152
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    In this letter, we first study the impact of basic reference frame jitter on digital image stabilization. Next, we propose a method for stabilizing a digital image sequence based on correcting for basic reference frame jitter. The experimental results show that our proposed method effectively decreases the excessive undefined areas in the stabilized image sequence caused by basic reference frame jitter.
    Download PDF (236K)
  • In Keun LEE, Soon Hak KWON
    Type: LETTER
    Subject area: Biocybernetics, Neurocomputing
    2010 Volume E93.D Issue 11 Pages 3153-3157
    Published: November 01, 2010
    Released: November 01, 2010
    JOURNALS FREE ACCESS
    Time is an important factor in the modeling and operation of dynamic systems. However, few studies have considered the time factor in the modeling and inference of fuzzy cognitive maps (FCMs), and none have dealt with time delay in the learning of FCMs. We therefore propose a learning rule for temporal FCMs involving post- and pre-delay times, obtained by extending Oja's learning rule. We show the effectiveness of the proposed rule through simulations that solve a time-delayed chemical plant control problem.
    Download PDF (178K)
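The baseline being extended is Oja's Hebbian learning rule, which drives a weight vector toward the data's first principal direction with unit norm. A minimal sketch of the standard rule (without the paper's post- and pre-delay terms, which are not reproduced here) is:

```python
import random

def oja_train(samples, eta=0.02, epochs=30, seed=0):
    """Oja's rule: w += eta * y * (x - y * w), with y = w . x.
    The weight vector converges toward the first principal direction
    of the data, with norm close to 1."""
    rng = random.Random(seed)
    dim = len(samples[0])
    w = [rng.uniform(-0.1, 0.1) for _ in range(dim)]
    for _ in range(epochs):
        for x in samples:
            y = sum(wi * xi for wi, xi in zip(w, x))
            w = [wi + eta * y * (xi - y * wi) for wi, xi in zip(w, x)]
    return w
```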