IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Volume E93.D , Issue 10
28 articles in this issue
Special Section on Data Mining and Statistical Science
  • Masashi SUGIYAMA
    2010 Volume E93.D Issue 10 Pages 2671
    Published: October 01, 2010
    Released: October 01, 2010
  • Hisashi KASHIMA, Satoshi OYAMA, Yoshihiro YAMANISHI, Koji TSUDA
    Type: PAPER
    2010 Volume E93.D Issue 10 Pages 2672-2679
    Published: October 01, 2010
    Released: October 01, 2010
Pairwise classification has many applications including network prediction, entity resolution, and collaborative filtering. The pairwise kernel has been proposed for these purposes by several research groups independently, and has been used successfully in several fields. In this paper, we propose an efficient alternative which we call the Cartesian kernel. While the existing pairwise kernel (which we refer to as the Kronecker kernel) can be interpreted as the weighted adjacency matrix of the Kronecker product graph of two graphs, the Cartesian kernel can be interpreted as that of the Cartesian product graph, which is sparser than the Kronecker product graph. We discuss the generalization bounds of the two pairwise kernels by using eigenvalue analysis of the kernel matrices. We also consider the N-wise extensions of the two pairwise kernels. Experimental results show that the Cartesian kernel is much faster than the Kronecker kernel and, at the same time, competitive with the Kronecker kernel in predictive performance.
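The contrast between the two pairwise kernels can be sketched as follows. This is a minimal illustration only: the Gaussian base kernel, its bandwidth, and the scalar inputs are assumptions for the sketch, not the paper's setup.

```python
import math

def k_rbf(x, y, gamma=1.0):
    # Illustrative Gaussian base kernel on scalars
    return math.exp(-gamma * (x - y) ** 2)

def kronecker_kernel(p, q, k=k_rbf):
    # K((a, b), (c, d)) = k(a, c) * k(b, d): every pair of pairs interacts
    (a, b), (c, d) = p, q
    return k(a, c) * k(b, d)

def cartesian_kernel(p, q, k=k_rbf):
    # K((a, b), (c, d)) = k(a, c)*[b == d] + [a == c]*k(b, d):
    # nonzero only when the two pairs share an element, hence a sparser matrix
    (a, b), (c, d) = p, q
    return k(a, c) * (b == d) + (a == c) * k(b, d)
```

The indicator terms are what make the Cartesian kernel matrix sparse: for pairs sharing no element, the entry is exactly zero, while the Kronecker kernel entry is merely small.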
  • Yukito IBA, Shotaro AKAHO
    Type: PAPER
    2010 Volume E93.D Issue 10 Pages 2680-2689
    Published: October 01, 2010
    Released: October 01, 2010
Regression analysis that incorporates measurement errors in the input variables is important in various applications. In this study, we consider this problem within the framework of Gaussian process regression. The proposed method can also be regarded as a generalization of kernel regression that includes errors in the regressors. A Markov chain Monte Carlo method is introduced, in which the infinite dimensionality of the Gaussian process is dealt with by a trick that exchanges the order of sampling of the latent variable and the function. The proposed method is tested with artificial data.
  • Masashi SUGIYAMA
    Type: PAPER
    2010 Volume E93.D Issue 10 Pages 2690-2701
    Published: October 01, 2010
    Released: October 01, 2010
Kernel logistic regression (KLR) is a powerful and flexible classification algorithm, which possesses the ability to provide the confidence of class prediction. However, its training, typically carried out by (quasi-)Newton methods, is rather time-consuming. In this paper, we propose an alternative probabilistic classification algorithm called the Least-Squares Probabilistic Classifier (LSPC). KLR models the class-posterior probability by a log-linear combination of kernel functions, and its parameters are learned by (regularized) maximum likelihood. In contrast, LSPC employs a linear combination of kernel functions, and its parameters are learned by regularized least-squares fitting to the true class-posterior probability. Thanks to this linear regularized least-squares formulation, the solution of LSPC can be computed analytically just by solving a regularized system of linear equations in a class-wise manner. Thus LSPC is computationally very efficient and numerically stable. Through experiments, we show that LSPC is faster than KLR by two orders of magnitude in computation time, with comparable classification accuracy.
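The class-wise analytic solution described above can be sketched in a few lines. This is a hedged toy sketch, not the paper's implementation: the Gaussian kernel choice, its width, the regularization constant, and the use of all training points as centers are assumptions made here for illustration.

```python
import numpy as np

def lspc_fit(X, y, n_classes, width=1.0, lam=0.1):
    # Gaussian kernel design matrix; all training points serve as centers
    K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * width**2))
    A = K.T @ K + lam * np.eye(len(X))
    # Class-wise analytic solution: regularized LS fit to each class indicator
    return [np.linalg.solve(A, K.T @ (y == c).astype(float))
            for c in range(n_classes)]

def lspc_predict_proba(X_train, alphas, x, width=1.0):
    phi = np.exp(-((X_train - x) ** 2).sum(-1) / (2 * width**2))
    p = np.maximum(0.0, np.array([phi @ a for a in alphas]))  # clip negatives
    return p / p.sum()  # renormalize into a probability vector
```

The key point the sketch shows is that each class requires only one linear solve, with no iterative optimization.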
  • The Dung LUONG, Tu Bao HO
    Type: PAPER
    2010 Volume E93.D Issue 10 Pages 2702-2708
    Published: October 01, 2010
    Released: October 01, 2010
Recently, privacy preservation has become one of the key issues in data mining. In many data mining applications, computing the frequencies of values, or of tuples of values, in a data set is a fundamental operation that is used repeatedly. Within the context of privacy-preserving data mining, several privacy-preserving frequency mining solutions have been proposed. These solutions are crucial steps in many privacy-preserving data mining tasks, and each was provided for a particular distributed data scenario. In this paper, we consider privacy-preserving frequency mining in a so-called 2-part fully distributed setting. In this scenario, the dataset is distributed across a large number of users, and each record is owned by two different users: one user knows the values of only a subset of the attributes, while the other knows the values of the remaining attributes. A miner aims to compute the frequencies of values or tuples of values while preserving each user's privacy. Some solutions based on randomization techniques can address this problem, but suffer from a tradeoff between privacy and accuracy. We develop a cryptographic protocol for privacy-preserving frequency mining which ensures each user's privacy without loss of accuracy. The experimental results show that our protocol is efficient as well.
  • Md. Anisuzzaman SIDDIQUE, Yasuhiko MORIMOTO
    Type: PAPER
    2010 Volume E93.D Issue 10 Pages 2709-2716
    Published: October 01, 2010
    Released: October 01, 2010
Given a set of objects, a skyline query finds the objects that are not dominated by others. In this paper, we consider a skyline query for sets of objects in a database. Let s be the number of objects in each set and n be the number of objects in the database; the number of sets in the database then amounts to nCs. We propose an efficient algorithm to compute the convex skyline of the nCs sets, and call the retrieved skyline objectsets "convex skyline objectsets". Experimental evaluation using real and synthetic datasets demonstrates that the proposed skyline objectset query is meaningful and is scalable enough to handle large and high-dimensional databases. Recently, we have had to become aware of individuals' privacy. Sometimes we have to hide individual values and are only allowed to disclose aggregated values of objects. In such situations, conventional skyline queries cannot be used, and the proposed function can be a promising alternative for decision making in a privacy-aware environment.
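The dominance relation underlying any skyline query can be sketched as follows. This is the basic object-level skyline under a minimize-all-dimensions convention, shown for orientation only; it is not the paper's objectset-level algorithm.

```python
def dominates(a, b):
    # a dominates b: no worse in every dimension, strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(points):
    # Keep every point that no other point dominates
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For example, among the points (1,4), (2,2), (4,1), and (3,3), only (3,3) is dominated (by (2,2)), so the skyline is the other three points.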
  • Makoto NAKATSUJI, Akimichi TANAKA, Takahiro MADOKORO, Kenichiro OKAMOT ...
    Type: PAPER
    2010 Volume E93.D Issue 10 Pages 2717-2727
    Published: October 01, 2010
    Released: October 01, 2010
Product developers frequently discuss topics related to their development project with others, but often use technical terms whose meanings are not clear to non-specialists. To provide non-experts with a precise and comprehensive understanding of the know-who/know-how being discussed, the method proposed herein categorizes the messages using a taxonomy of the products being developed and a taxonomy of tasks relevant to those products. The instances in the taxonomy are products and/or tasks manually selected as relevant to system development, and the concepts are defined by the taxonomy of instances. The proposed method first extracts phrases from discussion logs as data-driven instances relevant to system development. It then classifies those phrases to the concepts defined by taxonomy experts. The innovative feature of our method is that, in classifying a phrase to a concept, say C, the method considers the associations of the phrase not only with the instances of C, but also with the instances of the neighbor concepts of C (neighborhood is defined by the taxonomy). This approach is quite accurate in classifying phrases to concepts: the phrase is classified to C, not to the neighbors of C, even though they are quite similar to C. Next, we attach a data-driven concept to C; the data-driven concept includes the instances in C and a classified phrase as a data-driven instance. We analyze know-who and know-how by using not only human-defined concepts but also these data-driven concepts. We evaluated our method using the mailing list of an actual project. It classified phrases with twice the accuracy possible with the TF/iDF method, which does not consider the neighboring concepts. The taxonomy with data-driven concepts provides more detailed know-who/know-how than can be obtained from just the human-defined concepts themselves or from the data-driven concepts as determined by the TF/iDF method.
  • Shaopeng TANG, Satoshi GOTO
    Type: PAPER
    2010 Volume E93.D Issue 10 Pages 2728-2736
    Published: October 01, 2010
    Released: October 01, 2010
In this paper, a human detection method is developed, consisting of an appearance-based detector and a motion-based detector. A multi-scale block histogram of template (MB-HOT) feature is used to detect humans by appearance. It integrates gray-value information and gradient-value information, and represents the relationship of three blocks. Experiments on the INRIA dataset show that this feature is more discriminative than other features such as the histogram of oriented gradients (HOG). A motion-based feature is also proposed to capture the relative motion of the human body. This feature is calculated in the optical flow domain, and experimental results on our dataset show that it outperforms other motion-based features. The detection responses obtained by the two features are combined to reduce false detections. A graphics processing unit (GPU) based implementation is proposed to accelerate the calculation of the two features and make the method suitable for real-time applications.
Regular Section
  • Tetsuo YOKOYAMA, Gang ZENG, Hiroyuki TOMIYAMA, Hiroaki TAKADA
    Type: PAPER
    Subject area: Software System
    2010 Volume E93.D Issue 10 Pages 2737-2746
    Published: October 01, 2010
    Released: October 01, 2010
Principles for the good design of battery-aware voltage scheduling algorithms for both aperiodic and periodic task sets on dynamic voltage scaling (DVS) systems are presented. The proposed algorithms are based on greedy heuristics suggested by several battery characteristics and on Lagrange multipliers. In constructing the proposed algorithms, we make more proper use of the battery characteristics in the early stages of scheduling. As a consequence, the proposed algorithms show superior results on synthetic examples of periodic and aperiodic tasks, excerpted from the task sets of the comparative work, on uni- and multi-processor platforms, respectively. In particular, for some large task sets, the proposed algorithms make schedulable task sets that were previously unschedulable due to battery exhaustion.
  • Xiao XU, Weizhe ZHANG, Hongli ZHANG, Binxing FANG
    Type: PAPER
    Subject area: Data Engineering, Web Information Systems
    2010 Volume E93.D Issue 10 Pages 2747-2762
    Published: October 01, 2010
    Released: October 01, 2010
Internet computing has been proposed to exploit personal computing resources across the Internet in order to build large-scale Web applications at lower cost. In this paper, a DHT-based distributed Web crawling model based on the concept of Internet computing is proposed, together with two optimizations that reduce the download time and the waiting time of Web crawling tasks in order to increase the system's throughput and update rate. Based on our contributor-friendly download scheme, the improvement in download time is achieved by shortening the crawler-crawlee RTTs. To estimate the RTTs accurately, a network coordinate system is combined with the underlying DHT. The improvement in waiting time is achieved by redirecting incoming crawling tasks to lightly loaded crawlers in order to keep the queues on all crawlers equally sized. We also propose a simple Web site partition method that splits a large Web site into smaller pieces in order to reduce the task granularity. All the proposed methods are evaluated through real Internet tests and simulations, showing satisfactory results.
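The core idea of DHT-style task partitioning can be sketched with a simple hash-based assignment. This is a generic illustration under assumptions of ours (hashing the host name so a site, or a site piece after partitioning, always maps to the same crawler); the paper's actual DHT and load-balancing logic are richer.

```python
import hashlib

def assign_crawler(url, crawlers):
    # Hash the host part so all pages of one site (or one site piece) land on
    # the same crawler, mimicking DHT-style partitioning of the crawl space
    host = url.split("/")[2] if "//" in url else url
    digest = int(hashlib.sha1(host.encode()).hexdigest(), 16)
    return crawlers[digest % len(crawlers)]
```

Deterministic hashing gives every participant the same view of who owns which site, with no central coordinator.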
  • Jian SHEN, Sangman MOH, Ilyong CHUNG
    Type: PAPER
    Subject area: Information Network
    2010 Volume E93.D Issue 10 Pages 2763-2775
    Published: October 01, 2010
    Released: October 01, 2010
Delay Tolerant Networks (DTNs) are a class of emerging networks that experience frequent and long-duration partitions. Delay is inevitable in DTNs, so ensuring the validity and reliability of message transmission and making better use of buffer space are more important than concentrating on how to decrease the delay. In this paper, we present a novel routing protocol named Location and Direction Aware Priority Routing (LDPR) for DTNs, which utilizes the location and moving direction of nodes to deliver a message from source to destination. A node can obtain its location and moving direction by periodically receiving beacon packets from anchor nodes and referring to the received signal strength indicator (RSSI) of the beacons. LDPR contains two schemes, a transmission scheme and a drop scheme, which take advantage of the nodes' location and moving-direction information to transmit messages and to store messages in the buffer, respectively. In addition, each message is assigned a priority according to its attributes (e.g., importance, validity, security, and so on). The message priority decides the transmission order when delivering messages and the dropping sequence when the buffer is full. Simulation results show that the proposed LDPR protocol outperforms the epidemic routing (EPI) protocol, the prioritized epidemic routing (PREP) protocol, and the DTN hierarchical routing (DHR) protocol in terms of packet delivery ratio, normalized routing overhead, and average end-to-end delay. It is worth noting that LDPR does not need an infinite buffer size to ensure the packet delivery ratio, as EPI does. In particular, even when the buffer size is only 50, the packet delivery ratio of LDPR still reaches 93.9%, which can satisfy general communication demands. We expect LDPR to be of greater value than other existing solutions in highly disconnected and mobile networks.
  • Hideyuki ICHIHARA, Kenta SUTOH, Yuki YOSHIKAWA, Tomoo INOUE
    Type: PAPER
    Subject area: Information Network
    2010 Volume E93.D Issue 10 Pages 2776-2782
    Published: October 01, 2010
    Released: October 01, 2010
    Threshold testing, which is an LSI testing method based on the acceptability of faults, is effective in yield enhancement of LSIs and selective hardening for LSI systems. In this paper, we propose test generation models for threshold test generation. Using the proposed models, we can efficiently identify acceptable faults and generate test patterns for unacceptable faults with a general test generation algorithm, i.e., without a test generation algorithm specialized for threshold testing. Experimental results show that our approach is, in practice, effective.
  • Nobutaka KITO, Kensuke HANAI, Naofumi TAKAGI
    Type: PAPER
    Subject area: Information Network
    2010 Volume E93.D Issue 10 Pages 2783-2791
    Published: October 01, 2010
    Released: October 01, 2010
A C-testable 4-2 adder tree for an easily testable high-speed multiplier is proposed, and a recursive method for test generation is shown. By using specific patterns that we call ‘alternately inverted patterns,’ the adder tree, as well as the partial product generators, can be tested with 14 patterns regardless of its operand size under the cell fault model. The test patterns are easily fed through the partial product generators. The hardware overhead of the 4-2 adder tree with partial product generators for a 64-bit multiplier is about 15%. By using a previously proposed easily testable adder as the final adder, we can obtain an easily testable high-speed multiplier.
  • Akihiro INOKUCHI, Takashi WASHIO
    Type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2010 Volume E93.D Issue 10 Pages 2792-2804
    Published: October 01, 2010
    Released: October 01, 2010
    In recent years, the mining of a complete set of frequent subgraphs from labeled graph data has been studied extensively. However, to the best of our knowledge, no method has been proposed for finding frequent subsequences of graphs from a set of graph sequences. In this paper, we define a novel class of graph subsequences by introducing axiomatic rules for graph transformations, their admissibility constraints, and a union graph. Then we propose an efficient approach named “GTRACE” for enumerating frequent transformation subsequences (FTSs) of graphs from a given set of graph sequences. The fundamental performance of the proposed method is evaluated using artificial datasets, and its practicality is confirmed by experiments using real-world datasets.
  • Huimin LU, Boqin FENG, Xi CHEN
    Type: PAPER
    Subject area: Artificial Intelligence, Data Mining
    2010 Volume E93.D Issue 10 Pages 2805-2812
    Published: October 01, 2010
    Released: October 01, 2010
This paper presents a visual knowledge structure reasoning method using an Intelligent Topic Map (ITM), which extends the conventional Topic Map in structure and enhances its reasoning functions. The visual knowledge structure reasoning method integrates two types of knowledge reasoning: knowledge logical relation reasoning and knowledge structure reasoning. Knowledge logical relation reasoning implements knowledge consistency checking and the reasoning of implicit associations between knowledge points. We propose a Knowledge Unit Circle Search strategy for knowledge structure reasoning; it implements semantic implication extension, semantic relevance extension, and semantic class-belonging confirmation. Moreover, the knowledge structure reasoning results are visualized using the ITM Toolkit. A prototype system for visual knowledge structure reasoning has been implemented and applied to massive knowledge organization, management, and service for education.
  • Yu ZHOU, Junfeng LI, Yanqing SUN, Jianping ZHANG, Yonghong YAN, Masato ...
    Type: PAPER
    Subject area: Human-computer Interaction
    2010 Volume E93.D Issue 10 Pages 2813-2821
    Published: October 01, 2010
    Released: October 01, 2010
In this paper, we present a hybrid speech emotion recognition system exploiting both spectral and prosodic features of speech. For capturing the emotional information in the spectral domain, we propose a new spectral feature extraction method that applies a novel non-uniform subband processing instead of the mel-frequency subbands used in Mel-Frequency Cepstral Coefficients (MFCC). For the prosodic features, a set of features that are closely correlated with speech emotional states is selected. In the proposed hybrid emotion recognition system, due to the inherently different characteristics of these two kinds of features (e.g., data size), the newly extracted spectral features are modeled by a Gaussian Mixture Model (GMM) and the selected prosodic features are modeled by a Support Vector Machine (SVM). The final result of the proposed emotion recognition system is obtained by combining the results from these two subsystems. Experimental results show that (1) the proposed non-uniform spectral features are more effective than the traditional MFCC features for emotion recognition; and (2) the proposed hybrid emotion recognition system using both spectral and prosodic features yields a relative recognition-error reduction of 17.0% over traditional recognition systems using only the spectral features, and of 62.3% over those using only the prosodic features.
  • Sirikan CHUCHERD, Annupan RODTOOK, Stanislav S. MAKHANOV
    Type: PAPER
    Subject area: Image Processing and Video Processing
    2010 Volume E93.D Issue 10 Pages 2822-2835
    Published: October 01, 2010
    Released: October 01, 2010
We propose a modification of the generalized gradient vector flow field techniques based on multiresolution analysis and phase portrait techniques. The original image is subjected to multiresolution analysis to create a sequence of approximation and detail images. The approximations are converted into an edge map and subsequently into a gradient field subjected to the generalized gradient vector flow transformation. The procedure removes noise and extends large gradients. At every iteration the algorithm obtains a new, improved vector field, which is filtered using phase portrait analysis. The phase portrait is applied to a window of variable size to find possible boundary points and noise. As opposed to previous phase portrait techniques based on binary rules, our method generates a continuous, adjustable score. The score is a function of the eigenvalues of the corresponding linearized system of ordinary differential equations. The salient feature of the method is continuity: when the score is high, the point is likely to belong to a noisy part of the image, and when the score is low, it is likely to lie on the boundary of the object. The score is used by a filter applied to the original image. In the neighbourhood of points with a high score, the gray level is smoothed, whereas at boundary points the gray level is increased. Next, a new gradient field is generated and the result is incorporated into the iterative gradient vector flow iterations. This approach, combined with multiresolution analysis, leads to robust segmentations with an impressive improvement in accuracy. Our numerical experiments with synthetic and real medical ultrasound images show that the proposed technique outperforms the conventional gradient vector flow method even when the filters and the multiresolution are applied in the same fashion. Finally, we show that the proposed algorithm allows the initial contour to be much farther from the actual boundary than is possible with the conventional methods.
  • Kenichi KANATANI, Yasuyuki SUGAYA, Hirotaka NIITSUMA
    Type: PAPER
    Subject area: Image Recognition, Computer Vision
    2010 Volume E93.D Issue 10 Pages 2836-2845
    Published: October 01, 2010
    Released: October 01, 2010
We present an alternative approach to what we call the “standard optimization”, which minimizes a cost function by searching a parameter space. Instead, our approach “projects” the observations in the joint observation space onto the manifold defined by the “consistency constraint”, which demands that any minimal subset of observations produce the same result. This approach avoids many difficulties encountered in the standard optimization. As typical examples, we apply it to line fitting and multiview triangulation. The latter yields a new algorithm far more efficient than existing methods. We also discuss the optimality of our approach.
  • Makoto YAMADA, Masashi SUGIYAMA, Gordon WICHERN, Jaak SIMM
    Type: LETTER
    Subject area: Fundamentals of Information Systems
    2010 Volume E93.D Issue 10 Pages 2846-2849
    Published: October 01, 2010
    Released: October 01, 2010
    Estimating the ratio of two probability density functions (a.k.a. the importance) has recently gathered a great deal of attention since importance estimators can be used for solving various machine learning and data mining problems. In this paper, we propose a new importance estimation method using a mixture of probabilistic principal component analyzers. The proposed method is more flexible than existing approaches, and is expected to work well when the target importance function is correlated and rank-deficient. Through experiments, we illustrate the validity of the proposed approach.
  • Masahiro KIMOTO, Tatsuhiro TSUCHIYA, Tohru KIKUNO
    Type: LETTER
    Subject area: Fundamentals of Information Systems
    2010 Volume E93.D Issue 10 Pages 2850-2853
    Published: October 01, 2010
    Released: October 01, 2010
The exact time complexity of Hsu and Huang's self-stabilizing maximal matching algorithm is provided. It is $\frac{1}{2}n^2 + n - 2$ if the number of nodes n is even and $\frac{1}{2}n^2 + n - \frac{5}{2}$ if n is odd.
  • Dongook SEONG, Junho PARK, Myungho YEO, Jaesoo YOO
    Type: LETTER
    Subject area: Data Engineering, Web Information Systems
    2010 Volume E93.D Issue 10 Pages 2854-2857
    Published: October 01, 2010
    Released: October 01, 2010
In sensor networks, many studies have been proposed to process in-network aggregation efficiently. Unlike general aggregation queries, skyline query processing compares multi-dimensional data to obtain the result, so it is very difficult to process skyline queries in sensor networks, and it is important to filter out unnecessary data for energy-efficient skyline query processing. Existing approaches eliminate unnecessary data transmissions by deploying filters to all sensors; however, network lifetime is reduced by the energy consumed in transmitting the filters. In this paper, we propose a lazy filtering-based in-network skyline query processing algorithm that reduces the energy consumed in transmitting filters. Our algorithm creates a skyline filter table (SFT) in the data-gathering process, which sends data from the sensor nodes to the base station, and filters out unnecessary data transmissions using it. The experimental results show that our algorithm reduces false positives by 53% and improves network lifetime by 44% on average over the existing method.
  • Daekeun MOON, Kwangil LEE, Hagbae KIM
    Type: LETTER
    Subject area: Dependable Computing
    2010 Volume E93.D Issue 10 Pages 2858-2861
    Published: October 01, 2010
    Released: October 01, 2010
The rapid growth of information technology has enabled ship navigation and automation systems to gain better functionality and safety. However, these systems generally have their own proprietary structures and networks, which makes interfacing with them and accessing them remotely difficult. In this paper, we propose a total ship service framework that includes a ship area network to integrate separate system networks with heterogeneity and dynamicity, and a ship-shore communication infrastructure to support a remote monitoring and maintenance service using satellite communications. Finally, we present some ship service systems to demonstrate the applicability of the proposed framework.
  • Makio ISHIHARA, Yukio ISHIHARA
    Type: LETTER
    Subject area: Human-computer Interaction
    2010 Volume E93.D Issue 10 Pages 2862-2865
    Published: October 01, 2010
    Released: October 01, 2010
    This manuscript introduces a pointing interface for a tabletop display with a reflex in eye-hand coordination. The reflex is a natural response to inconsistency between kinetic information of a mouse and visual feedback of the mouse cursor. The reflex yields information on which side the user sees the screen from, so that the screen coordinates are aligned with the user's position.
  • Na DUAN, Soon Hak KWON
    Type: LETTER
    Subject area: Image Processing and Video Processing
    2010 Volume E93.D Issue 10 Pages 2866-2869
    Published: October 01, 2010
    Released: October 01, 2010
    Various contrast enhancement methods such as histogram equalization (HE) and local contrast enhancement (LCE) have been developed to increase the visibility and details of a degraded image. We propose an image contrast enhancement method based on the global and local adjustment of gray levels by combining HE with LCE methods. For the optimal combination of both, we introduce a discrete entropy. Evaluation of our experimental results shows that the proposed method outperforms both the HE and LCE methods.
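The global HE component mentioned above can be sketched in a few lines. This is standard histogram equalization for 8-bit grayscale images, shown for orientation; it is not the paper's combined HE+LCE method, and the clipping of the lookup table is an implementation choice of this sketch.

```python
import numpy as np

def hist_equalize(img):
    # Global HE: remap gray levels through the normalized cumulative histogram
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first nonzero CDF value
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]
```

After equalization the occupied gray levels are spread across the full [0, 255] range, which is exactly the global-stretching behavior the paper combines with local enhancement.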
  • Tetsu MATSUKAWA, Takio KURITA
    Type: LETTER
    Subject area: Image Recognition, Computer Vision
    2010 Volume E93.D Issue 10 Pages 2870-2874
    Published: October 01, 2010
    Released: October 01, 2010
This paper presents a combined feature extraction method to improve the performance of bag-of-features image classification. We apply 10 relevant operations to global/local statistics of visual words. Because the number of pairwise combinations of visual words is large, we apply feature selection methods including the Fisher discriminant criterion and L1-SVM. The effectiveness of the proposed method is confirmed through experiments.
  • Kazuya UEKI, Masashi SUGIYAMA, Yasuyuki IHARA
    Type: LETTER
    Subject area: Image Recognition, Computer Vision
    2010 Volume E93.D Issue 10 Pages 2875-2878
    Published: October 01, 2010
    Released: October 01, 2010
We address the problem of perceived age estimation from face images, and propose a new semi-supervised approach involving two novel aspects. The first novelty is an efficient active learning strategy for reducing the cost of labeling face samples. Given a large number of unlabeled face samples, we reveal the cluster structure of the data and propose to label cluster-representative samples so as to cover as many clusters as possible. This simple sampling strategy allows us to boost the performance of a manifold-based semi-supervised learning method with only a relatively small number of labeled samples. The second contribution is to take the heterogeneous characteristics of human age perception into account. It is rare to misjudge the age of a 5-year-old child as 15 years old, but the age of a 35-year-old person is often misjudged as 45 years old; thus, the magnitude of the error differs depending on the subject's age. We carried out a large-scale questionnaire survey to quantify human age perception characteristics, and propose to utilize the quantified characteristics in a framework of weighted regression. Consequently, our proposed method is expressed in the form of weighted least squares with a manifold regularizer, which is scalable to massive datasets. Through real-world age estimation experiments, we demonstrate the usefulness of the proposed method.
  • Chang Wook AHN, Yehoon KIM
    Type: LETTER
    Subject area: Biocybernetics, Neurocomputing
    2010 Volume E93.D Issue 10 Pages 2879-2882
    Published: October 01, 2010
    Released: October 01, 2010
    This paper presents an approach for improving proximity and diversity in multiobjective evolutionary algorithms (MOEAs). The idea is to discover new nondominated solutions in the promising area of search space. It can be achieved by applying mutation only to the most converged and the least crowded individuals. In other words, the proximity and diversity can be improved because new nondominated solutions are found in the vicinity of the individuals highly converged and less crowded. Empirical results on multiobjective knapsack problems (MKPs) demonstrate that the proposed approach discovers a set of nondominated solutions much closer to the global Pareto front while maintaining a better distribution of the solutions.
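Selecting the least crowded individuals requires a crowding measure. A standard NSGA-II-style crowding distance, used here as an assumption for illustration rather than as the authors' exact measure, can be sketched as:

```python
def crowding_distance(front):
    # NSGA-II-style crowding distance over a nondominated front of objective
    # vectors; boundary points get infinite distance so they are never culled
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = front[order[-1]][k] - front[order[0]][k] or 1.0  # avoid /0
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k]
                               - front[order[j - 1]][k]) / span
    return dist
```

Individuals with the smallest finite distance are the most crowded; under the abstract's idea, mutation would instead target those with the largest distance (the least crowded ones).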
  • In Keun LEE, Soon Hak KWON
    Type: LETTER
    Subject area: Biocybernetics, Neurocomputing
    2010 Volume E93.D Issue 10 Pages 2883-2886
    Published: October 01, 2010
    Released: October 01, 2010
Fuzzy cognitive maps (FCMs) are used to support decision making, and the decision processes are performed by inference on FCMs. The inference depends greatly on activation functions such as the sigmoid function, hyperbolic tangent function, step function, and threshold linear function. However, the sigmoid functions widely used in decision-making processes have conventionally been designed by experts. We therefore propose a method for designing sigmoid functions through Lyapunov stability analysis, and show the usefulness of the proposed method through experimental results on FCM inference using the designed sigmoid functions.
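The FCM inference loop whose convergence such an analysis concerns can be sketched minimally as follows. The weight matrix, the steepness parameter lam (the quantity one would design via stability analysis), and the iteration count are illustrative assumptions, not the letter's settings.

```python
import math

def sigmoid(x, lam=1.0):
    # lam controls steepness; it is the design parameter analyzed for stability
    return 1.0 / (1.0 + math.exp(-lam * x))

def fcm_step(state, W, lam=1.0):
    # One inference step: each concept aggregates weighted neighbor activations
    n = len(state)
    return [sigmoid(sum(W[j][i] * state[j] for j in range(n)), lam)
            for i in range(n)]

def fcm_infer(state, W, lam=1.0, iters=50):
    # Iterate toward a (hoped-for) fixed point; whether one exists and is
    # stable depends on lam and W, which is what the Lyapunov analysis studies
    for _ in range(iters):
        state = fcm_step(state, W, lam)
    return state
```

With a mild slope (small lam) the iteration settles to a fixed point; a steep sigmoid can instead produce oscillation, which is why the choice of lam matters.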