Transactions of the Japanese Society for Artificial Intelligence
Online ISSN : 1346-8030
Print ISSN : 1346-0714
ISSN-L : 1346-0714
Volume 16, Issue 2
Short Notes
  • 蜷川 繁, 米田 政明, 広瀬 貞樹
    Article type: Technical Report
    Field: Other
    2001 Volume 16 Issue 2 Pages 164-166
    Published: 2001
    Released: 2002/02/28
    Journal Free Access
The “Game of Life” exhibits significant behavior, such as universal computation, self-organized criticality, and 1/f fluctuation, depending on its initial configuration. Our research investigates the relationship between the transient behavior starting from random initial configurations and the array size in the Game of Life. Simulations show that the average transient time ⟨T⟩ increases logarithmically with the size of a square N×N array, ⟨T⟩ ∼ log N, under both null and periodic boundary conditions. This result suggests that the duration of the 1/f fluctuation in the “Game of Life” lengthens without bound as the array size tends to infinity.
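    A minimal sketch (not the authors' code) of the measured quantity: evolve a random N×N configuration under periodic boundary conditions and record the number of steps until the configuration first revisits an earlier state, i.e. enters a cycle.

    ```python
    import numpy as np

    def step(grid):
        # Count live neighbors with periodic (toroidal) boundary conditions.
        n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0))
        # A cell lives next step with 3 neighbors, or 2 if it is already alive.
        return ((n == 3) | ((grid == 1) & (n == 2))).astype(np.uint8)

    def transient_time(N, density=0.5, max_steps=20000, seed=None):
        rng = np.random.default_rng(seed)
        grid = (rng.random((N, N)) < density).astype(np.uint8)
        seen = {grid.tobytes(): 0}
        for t in range(1, max_steps):
            grid = step(grid)
            key = grid.tobytes()
            if key in seen:
                return seen[key]        # time at which the cycle begins
            seen[key] = t
        return max_steps

    # Averaging transient_time over many runs for several N should show <T> ~ log N.
    ```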
Papers
  • 酒井 健作, 大政 崇, 西原 清一
    Article type: Research Paper
    Field: Other
    2001 Volume 16 Issue 2 Pages 167-174
    Published: 2001
    Released: 2002/02/28
    Journal Free Access
A geometric constraint solver for finding legal configurations of an under-constrained set of geometric components is proposed. While making drawings interactively, the user usually specifies only a few geometric constraints explicitly, because some constraints are not clear even to the user, or because it is not practical to specify all constraints at an early design stage. Theoretically, a full set of geometric constraints is necessary to define a unique layout of all geometric components, but such a set is naturally not available throughout the process. Therefore, in such an under-constrained situation, the missing constraints must be supplemented properly to determine the final layout of the components that remain undefined. For this purpose, we propose a geometric constraint solver that works in two phases: the first satisfies all the explicit constraints imposed by the user, and the second chooses appropriate value instances for every geometric component whose layout is not yet determined. We implemented a constraint-based interactive system for designing line drawings, composed of the solver and an interactive module that lets the user input geometric modification commands incrementally, and demonstrated its effectiveness through experiments.
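    A minimal two-phase sketch in the spirit of the solver (a hypothetical numerical formulation, not the paper's implementation): phase one satisfies the explicit user constraints by least squares; phase two chooses value instances for the remaining degrees of freedom by adding soft "default" preferences.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def solve_two_phase(x0, explicit_residuals, default_residuals):
        # Phase 1: satisfy all explicit constraints imposed by the user.
        phase1 = least_squares(explicit_residuals, x0)
        # Phase 2: keep the explicit constraints (heavily weighted) while
        # resolving the under-constrained components with soft defaults.
        combined = lambda x: np.concatenate(
            [10.0 * np.atleast_1d(explicit_residuals(x)),
             np.atleast_1d(default_residuals(x))])
        return least_squares(combined, phase1.x).x

    # Example: two points with one explicit constraint "distance = 5";
    # the default preference keeps components near their sketched positions.
    sketch = np.array([0.0, 0.0, 3.0, 1.0])          # (x1, y1, x2, y2) as drawn
    dist = lambda x: np.atleast_1d(np.hypot(x[2] - x[0], x[3] - x[1]) - 5.0)
    near = lambda x: x - sketch
    print(solve_two_phase(sketch, dist, near))
    ```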
  • Optimization of Deceptive Multimodal Functions
    高橋 治, 木村 周平, 小林 重信
    Article type: Research Paper
    Field: Other
    2001 Volume 16 Issue 2 Pages 175-184
    Published: 2001
    Released: 2002/02/28
    Journal Free Access
Biologically inspired evolutionary algorithms (EAs), which use individuals as search points and advance the search through the evolution or adaptation of those individuals, are widely applied to many optimization problems. Many real-world problems that can be cast as optimization problems are difficult because their landscapes are multimodal, epistatic, and riddled with strong local minima. Current real-coded genetic algorithms (GAs) can solve high-dimensional multimodal functions, but not strongly deceptive ones. Niching GAs handle low-dimensional multimodal functions by maintaining the diversity of the search population, but they do not scale to high-dimensional functions. In order to optimize high-dimensional deceptive multimodal functions, we propose a new EA called Adaptive Neighboring Search (ANS), which consists of a selection for reproduction that restricts mating individuals to neighbors, a crossover-like mutation (XLM) using the mating individuals, and an elitist selection for survival within one centered parent and its offspring. Through individualized generation alternation and the complementary crossover-like mutation, ANS realizes a self-distributive and locally adaptive search in which individuals divide among plural promising valleys and converge within the same valley. ANS is applicable to high-dimensional deceptive multimodal function optimization because these features are independent of the number of problem dimensions. By applying it to the high-dimensional Fletcher and Powell function as a deceptive multimodal benchmark, we show that ANS can obtain various solutions, including several optimal ones, with high probability.
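    A minimal structural sketch of such a neighbor-restricted search loop, with assumed stand-ins: a sphere objective, Euclidean nearest-neighbor mating, and a spread-scaled Gaussian perturbation in place of the paper's XLM operator.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    f = lambda x: np.sum(x ** 2, axis=-1)        # stand-in objective (minimize)
    pop = rng.uniform(-5, 5, size=(50, 10))      # 50 individuals, 10 dimensions

    for _ in range(5000):
        i = rng.integers(len(pop))               # one centered parent per step
        # Selection for reproduction: mate only with the parent's nearest neighbors.
        d = np.linalg.norm(pop - pop[i], axis=1)
        mates = pop[np.argsort(d)[1:6]]
        # Crossover-like mutation: perturb the parent by the mates' local spread.
        child = pop[i] + rng.normal(0, 1, pop.shape[1]) * mates.std(axis=0)
        # Elitist selection for survival within the family (parent vs. offspring).
        if f(child) < f(pop[i]):
            pop[i] = child

    print("best objective value:", f(pop).min())
    ```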
  • 宮崎 和光, 坪井 創吾, 小林 重信
    Article type: Research Paper
    Field: Other
    2001 Volume 16 Issue 2 Pages 185-192
    Published: 2001
    Released: 2002/02/28
    Journal Free Access
Reinforcement learning is a kind of machine learning that aims to adapt an agent to a given environment using rewards as a clue. In general, the purpose of a reinforcement learning system is to acquire an optimal policy that maximizes the expected reward per action. However, this is not always what matters in every environment. In particular, when we apply reinforcement learning to engineering environments, we expect the agent to avoid all penalties. In Markov Decision Processes, a pair of a sensory input and an action is called a rule. We call a rule a penalty rule if and only if it incurs a penalty or can lead to a penalty state without contributing to obtaining any reward. After suppressing all penalty rules, we aim to construct a rational policy whose expected reward per action is greater than zero. In this paper, we propose a penalty-suppressing algorithm that can suppress any penalty and obtain rewards constantly. We show its effectiveness by applying it to tic-tac-toe.
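    One simplified reading of the penalty-rule definition, sketched for a known deterministic MDP (the paper's setting is learned from experience; this offline fixed-point marking is only an illustration, not the proposed algorithm):

    ```python
    def penalty_rules(states, actions, next_state, penalty):
        """next_state(s, a) -> successor state; penalty(s, a) -> bool."""
        marked = {(s, a) for s in states for a in actions if penalty(s, a)}
        changed = True
        while changed:                           # iterate to a fixed point
            changed = False
            for s in states:
                for a in actions:
                    if (s, a) in marked:
                        continue
                    t = next_state(s, a)
                    # A rule is also a penalty rule if every continuation
                    # from its successor state is already marked.
                    if all((t, b) in marked for b in actions):
                        marked.add((s, a))
                        changed = True
        return marked                            # rules a rational policy must avoid
    ```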
  • 後藤 匡史, 長木 悠太, 鈴木 英之進
    Article type: Research Paper
    Field: Other
    2001 Volume 16 Issue 2 Pages 193-201
    Published: 2001
    Released: 2002/02/28
    Journal Free Access
This paper presents a novel decision-tree induction method for a multi-objective data set, i.e. a data set with a multi-dimensional class. Inductive decision-tree learning is one of the frequently used methods for a single-objective data set, i.e. a data set with a single-dimensional class. However, in real data analysis we usually have multiple objectives, and a classifier that explains them simultaneously would be useful. A conventional decision-tree inducer requires transforming the multi-dimensional class into a single-dimensional class, but such a transformation can considerably worsen both accuracy and readability. In order to circumvent this problem we propose a bloomy decision tree, which deals with a multi-dimensional class without such transformations. A bloomy decision tree consists of a set of decision nodes, each of which splits examples according to their attribute values, and a set of flower nodes, each of which decides a dimension of the class for examples. A flower node appears not only at the fringe of a tree but also inside a tree. Our pruning is executed during tree construction and evaluates each dimension of the class based on Cramér's V. The proposed method has been implemented as D3-B (Decision tree in Bloom) and tested with eleven benchmark data sets from the machine learning community. The experiments showed that D3-B achieves higher accuracy than C4.5 on nine data sets and ties with it on the other two. In terms of readability, D3-B has a smaller number of decision nodes on all data sets, and thus outperforms C4.5. Moreover, experts in agriculture evaluated bloomy decision trees, each induced from an agricultural data set, and found them appropriate and interesting.
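    The per-dimension evaluation during pruning uses the standard Cramér's V statistic; a minimal implementation (not D3-B itself):

    ```python
    import numpy as np

    def cramers_v(table):
        """table: r x c contingency table of one class dimension vs. a split."""
        table = np.asarray(table, dtype=float)
        n = table.sum()
        expected = np.outer(table.sum(1), table.sum(0)) / n   # independence model
        chi2 = ((table - expected) ** 2 / expected).sum()
        r, c = table.shape
        return np.sqrt(chi2 / (n * (min(r, c) - 1)))          # 0 = none, 1 = perfect

    print(cramers_v([[30, 10], [5, 55]]))   # strong association (~0.69)
    ```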
  • 越野 亮, 林 貴宏, 木村 春彦, 広瀬 貞樹
    Article type: Research Paper
    Field: Other
    2001 Volume 16 Issue 2 Pages 202-211
    Published: 2001
    Released: 2002/02/28
    Journal Free Access
Hypothetical reasoning, which finds an explanation for a given set of observations by assuming hypothetical sets, is a useful knowledge-processing framework because of its theoretical basis and its usefulness for practical problems such as diagnosis, scheduling, and design. It is, however, known to be computationally very expensive for large problems because it must deal with incomplete knowledge. Predicate logic allows more powerful and compact knowledge representation than propositional logic. Efficient reasoning methods have been developed for propositional logic; developing similar methods for predicate logic would be desirable, but this has so far proved difficult. KICK-HOPE is a hypothetical reasoning system for predicate logic. Based on a deductive database technique called the QSQR method and a basic fast hypothetical reasoning system, the ATMS (Assumption-based Truth Maintenance System), KICK-HOPE advanced the speed of reasoning. However, for a practical knowledge base, the inconsistency process and the subsumption process still consume much computation time. In this paper, we investigate an efficient method for the inconsistency and subsumption processes of KICK-HOPE. First, we show that most of these processes are vain and can be omitted. Second, to omit the vain processes, we introduce a bit vector (check-vector) that represents which hypotheses are included in each hypothetical set, so that the vain processes are reduced to checking the check-vector. Finally, we examine the proposed system on two examples, a diagnosis problem for a logic circuit and a scheduling problem. It is shown to be more efficient than the original KICK-HOPE on all problems.
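    A minimal sketch of the check-vector idea (encoding details assumed): representing which hypotheses each hypothetical set includes as a bit vector turns subsumption and inconsistency tests into cheap bitwise operations.

    ```python
    hypotheses = ["h1", "h2", "h3", "h4"]
    bit = {h: 1 << i for i, h in enumerate(hypotheses)}

    def to_vector(hyp_set):
        v = 0
        for h in hyp_set:
            v |= bit[h]                     # set the bit of each included hypothesis
        return v

    def subsumes(a, b):
        # a subsumes b if every hypothesis of a is also included in b.
        return a & ~b == 0

    nogood = to_vector({"h1", "h3"})        # a known inconsistent combination
    candidate = to_vector({"h1", "h2", "h3"})
    print(subsumes(nogood, candidate))      # True: candidate is inconsistent, prune it
    ```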
  • 森田 千絵, 柿元 満, 菊池 吉晃, 月本 洋
    Article type: Research Paper
    Field: Other
    2001 Volume 16 Issue 2 Pages 212-219
    Published: 2001
    Released: 2002/02/28
    Journal Free Access
As a result of the ongoing development of non-invasive analysis of brain function, detailed brain images can be obtained, from which the relations between brain areas and brain functions can be understood. These relations are described by rules. Knowledge discovery from functional brain images is knowledge discovery from pattern data, a new field distinct from knowledge discovery from symbolic or numerical data. We have been developing a new method called Logical Regression Analysis, which consists of two steps. The first step is a regression analysis. The second step is rule extraction from the regression formula obtained by the regression analysis. In this paper, we apply Logical Regression Analysis to functional brain images to discover relations between a brain function and brain areas. We use nonparametric regression as the regression analysis, since functional brain images do not provide sufficient data to obtain linear formulas by conventional linear regression. Experimental results show that the algorithm works well for real data.
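    A minimal sketch of the two-step structure, with assumed stand-ins: a Nadaraya-Watson kernel smoother plays the nonparametric regression, and "rules" are read off as the inputs (e.g. brain areas) whose estimated marginal effect on the fitted surface exceeds a threshold.

    ```python
    import numpy as np

    def kernel_regression(X, y, x, h=1.0):
        # Step 1: Nadaraya-Watson estimator, a simple nonparametric regression.
        w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * h * h))
        return w @ y / w.sum()

    def extract_rules(X, y, threshold=0.5, h=1.0, eps=1e-3):
        # Step 2: keep the inputs whose marginal effect on the fit is large.
        base = np.zeros(X.shape[1])
        f0 = kernel_regression(X, y, base, h)
        rules = []
        for j in range(X.shape[1]):
            probe = base.copy(); probe[j] = eps
            effect = (kernel_regression(X, y, probe, h) - f0) / eps
            if abs(effect) > threshold:
                rules.append((j, effect))   # "input j relates to the function"
        return rules

    # Toy usage: the response depends on the first input dimension only.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5)); y = 2.0 * X[:, 0] + rng.normal(0, 0.1, 200)
    print(extract_rules(X, y))              # expected: a rule for input 0 only
    ```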
Short Notes
  • 宇野 富美子, 宇野 洋二
    Article type: Technical Report
    Field: Other
    2001 Volume 16 Issue 2 Pages 220-224
    Published: 2001
    Released: 2002/02/28
    Journal Free Access
It is observed that the context of arguments sometimes deviates in debates and disputes. The subjects and predicates used in cross-debates are similar but differ between the two opposing arguments. In this paper, we deal with debates and propose a fundamental procedure for examining the context of arguments. The procedure consists of two practical processes: one extracts constructive arguments and refutations (or replies), and the other examines the context between them. The context is examined by verifying the words in the propositions of the constructive arguments and refutations. The context between propositions whose verified words do not agree with each other is considered broken, and the corresponding proposition is judged invalid.
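    A minimal sketch of the verification step, assuming crude whitespace tokenization and an overlap threshold in place of the paper's word-verification procedure:

    ```python
    def context_holds(constructive, refutation, min_overlap=2):
        a = set(constructive.lower().split())
        b = set(refutation.lower().split())
        # Too little agreement between the propositions' words -> broken context.
        return len(a & b) >= min_overlap

    print(context_holds("nuclear power is cheap energy",
                        "nuclear power is not cheap"))     # True: context holds
    print(context_holds("nuclear power is cheap energy",
                        "wind turbines are noisy"))        # False: invalid reply
    ```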
Papers
  • 平田 高志, 村上 晴美, 西田 豊明
    Article type: Research Paper
    Field: Other
    2001 Volume 16 Issue 2 Pages 225-233
    Published: 2001
    Released: 2002/02/28
    Journal Free Access
We developed a system called CoMeMo-Community that supports community knowledge sharing using a Talking-Virtualized-Egos (TVE) metaphor. A virtualized ego is a software agent that functions as an alter ego. It works as a medium for conveying one's knowledge to others by interacting with other virtualized egos or community members over a network. This interaction among virtualized egos makes knowledge interaction in a community visible. We performed an experiment to examine the following three points: (a) how people generate associative representations from ideas, (b) how community knowledge evolves as people express their own and others' ideas clearly, and (c) how people act in such a knowledge-creating process. The subjects were 45 students who took a lecture on artificial intelligence at our institute. In this four-week experiment, we observed the following. (a) Community knowledge building: some subjects created new information based on information published by other subjects. (b) Change of topic: although most of the published information was about “Nara” at first, as time passed the topics changed to “agent” and “computer”. (c) Creation of new human relationships: one subject was able to make friends using the system.
  • 長谷川 隆三, 藤田 博, 越村 三幸
    Article type: Research Paper
    Field: Other
    2001 Volume 16 Issue 2 Pages 234-245
    Published: 2001
    Released: 2002/02/28
    Journal Free Access
We present an efficient method for minimal model generation. The method employs branching assumptions and lemmas to prune branches that lead to nonminimal models and to reduce minimality tests on obtained models. Branching lemmas are extracted from a subproof of a disjunct and work as factorization. This method is applicable to other approaches such as Bry's constrained search or Niemelä's groundedness test, and greatly improves their efficiency. We implemented MM-MGTP based on the method. Experimental results with MM-MGTP show a remarkable speedup compared to MM-SATCHMO.
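    For context, the naive minimality test that such pruning aims to avoid: a model of a clause set is minimal iff no proper subset is also a model. The clause encoding below is an assumption for illustration.

    ```python
    from itertools import combinations

    def is_model(m, clauses):
        # Each clause is (body, heads): if body is contained in m,
        # some head must also be in m.
        return all(not body <= m or bool(heads & m) for body, heads in clauses)

    def is_minimal(m, clauses):
        return all(not is_model(set(sub), clauses)
                   for k in range(len(m))
                   for sub in combinations(m, k))

    # Example: p.  p -> q v r.  Both {p, q} and {p, r} are minimal models.
    clauses = [(set(), {"p"}), ({"p"}, {"q", "r"})]
    print(is_minimal({"p", "q"}, clauses))       # True
    print(is_minimal({"p", "q", "r"}, clauses))  # False: {p, q} suffices
    ```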
Special Papers
  • Kazuyuki Tanaka
    Article type: Special Paper
    Field: Other
    2001 Volume 16 Issue 2 Pages 246-258
    Published: 2001
    Released: 2002/02/28
    Journal Free Access
For image restoration, two different mathematical frameworks for massive probabilistic models are given from the standpoint of Bayesian statistics. One of them is formulated by assuming that the a priori probability distribution has the form of a Gibbs micro-canonical distribution, and it can be reduced to a constrained optimization. In this framework, we have to know a quantity estimated from the original image, though we do not need the configuration of the original image itself. We give a new method to estimate the length of boundaries between areas with different grey-levels in the original image from the degraded image alone in grey-level image restoration. Once the length of boundaries in the original image is estimated from the given degraded image, we can construct a probabilistic model for image restoration by means of the Bayes formula. This framework can be regarded as probabilistic information processing at zero temperature. The other framework is formulated by assuming that the a priori probability distribution has the form of a Gibbs canonical distribution. In this framework, some hyperparameters are determined so as to maximize a marginal likelihood. In this paper, the a priori probability distribution is assumed to be a Potts model, which is a familiar statistical-mechanical model. Under this assumption, the logarithm of the marginal likelihood can be expressed in terms of the free energies of some probabilistic models, and hence this framework can be regarded as probabilistic information processing at finite temperature. Practical algorithms for the constrained optimization and the maximum marginal likelihood estimation are given by means of the cluster variation method, a highly accurate statistical-mechanical approximation for massive probabilistic models. We compare the two frameworks with each other in numerical experiments on grey-level image restoration.
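    As one concrete reading of the zero-temperature framework, the sketch below restores a grey-level image by minimizing a posterior energy with a Gaussian likelihood and a Potts smoothness term; iterated conditional modes (ICM) stands in for the paper's cluster-variation-based algorithms, and all parameter values are assumptions.

    ```python
    import numpy as np

    def restore_icm(degraded, levels, beta=1.5, sigma=1.0, sweeps=10):
        x = degraded.copy()
        H, W = x.shape
        for _ in range(sweeps):
            for i in range(H):
                for j in range(W):
                    nbrs = [x[(i + di) % H, (j + dj) % W]
                            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
                    # Posterior energy per grey level: data term + Potts prior.
                    costs = [(degraded[i, j] - g) ** 2 / (2 * sigma ** 2)
                             + beta * sum(g != nb for nb in nbrs)
                             for g in levels]
                    x[i, j] = levels[int(np.argmin(costs))]
        return x

    # e.g. restored = restore_icm(noisy_image, levels=list(range(8)))
    ```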
  • 田中 和之
    Article type: Special Paper
    Field: Other
    2001 Volume 16 Issue 2 Pages 259-267
    Published: 2001
    Released: 2002/02/28
    Journal Free Access
The compound Gauss-Markov random field model is one of the Markov random field models for natural image restoration. An optimization algorithm for it was previously constructed by means of the mean-field approximation, a familiar technique in statistical mechanics for analyzing massive probabilistic models approximately. The cluster variation method was proposed in statistical mechanics as an extended version of the mean-field approximation. While the mean-field approximation treats only the marginal probability distribution of each single pixel, the cluster variation method can take into account the correlation between pixels by treating the marginal probability distribution of every nearest-neighbor pair of pixels. In this paper, we propose a new statistical-mechanical iterative algorithm based on the cluster variation method for natural image restoration in the compound Gauss-Markov random field model. In numerical experiments, we investigate how the proposed algorithm improves the quality of restored images by comparing it with the algorithm constructed from the mean-field approximation.
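    A minimal illustration of the mean-field style of update that the paper extends: for a plain Gaussian MRF prior (not the compound model with line processes), each pixel's mean is updated self-consistently from its data term and the neighboring means; the cluster variation method would additionally track nearest-neighbor pair marginals.

    ```python
    import numpy as np

    def mean_field_restore(y, lam=4.0, sigma2=1.0, iters=50):
        m = y.copy()                          # mean-field means, one per pixel
        for _ in range(iters):
            nbr_sum = (np.roll(m, 1, 0) + np.roll(m, -1, 0) +
                       np.roll(m, 1, 1) + np.roll(m, -1, 1))
            # Self-consistent update balancing the data term and neighbor means.
            m = (y / sigma2 + lam * nbr_sum) / (1.0 / sigma2 + 4.0 * lam)
        return m
    ```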
  • 和田 卓也, 元田 浩, 鷲尾 隆
    Article type: Special Paper
    Field: Other
    2001 Volume 16 Issue 2 Pages 268-278
    Published: 2001
    Released: 2002/02/28
    Journal Free Access
The knowledge acquisition method “Ripple Down Rules” (RDR) can directly acquire and encode knowledge from human experts. It is an incremental method in which each new piece of knowledge is added as an exception to the existing knowledge base, which takes the form of a binary tree. Another type of knowledge acquisition method learns directly from data; decision tree induction is one representative example. As ever more data are stored in databases in this digital era, using both human expertise and the stored data becomes even more important. In this paper, we attempt to integrate inductive learning and knowledge acquisition. We show that, using the minimum description length principle, the RDR knowledge base can be constructed automatically and incrementally from data, making it possible to switch between manual acquisition by a human expert and automatic induction from data at any point of the knowledge acquisition process. Carefully designed experiments verify that the proposed method indeed works for many data sets of different natures.
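    A minimal sketch of the RDR knowledge-base shape described above: a binary tree whose nodes hold rules with "except" and "else" branches; the MDL-driven automatic construction from data is omitted.

    ```python
    class RDRNode:
        def __init__(self, cond, conclusion):
            self.cond, self.conclusion = cond, conclusion
            self.except_, self.else_ = None, None   # the two branches

        def classify(self, case, default=None):
            if self.cond(case):
                # A firing rule may be overridden by an exception below it.
                if self.except_:
                    return self.except_.classify(case, self.conclusion)
                return self.conclusion
            return self.else_.classify(case, default) if self.else_ else default

    # Usage: "birds fly, except penguins" -- the exception is added as a child.
    kb = RDRNode(lambda c: c["bird"], "flies")
    kb.except_ = RDRNode(lambda c: c["penguin"], "does not fly")
    print(kb.classify({"bird": True, "penguin": False}))   # flies
    print(kb.classify({"bird": True, "penguin": True}))    # does not fly
    ```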
  • Yukito Iba
    Article type: Special Paper
    Field: Other
    2001 Volume 16 Issue 2 Pages 279-286
    Published: 2001
    Released: 2002/02/28
    Journal Free Access
We give a cross-disciplinary survey of “population” Monte Carlo algorithms. In these algorithms, a set of “walkers” or “particles” is used as a representation of a high-dimensional vector, and the computation is carried out by a random walk and the split/deletion of these objects. Such algorithms have been developed in various fields of physics and the statistical sciences and are known by many different names: “quantum Monte Carlo”, “transfer-matrix Monte Carlo”, “Monte Carlo filter (particle filter)”, “sequential Monte Carlo”, “PERM”, etc. Here we discuss them in a coherent framework. We also touch on related algorithms: genetic algorithms and annealed importance sampling.
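    A minimal sketch of the shared pattern in Monte Carlo filter (particle filter) form; the Gaussian random-walk transition and Gaussian observation weights are assumptions chosen for brevity.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 1000
    walkers = rng.normal(0.0, 1.0, n)           # the population represents p(x)

    for obs in [0.5, 1.0, 1.3]:                 # a short observation sequence
        walkers = walkers + rng.normal(0.0, 0.3, n)   # random-walk transition
        w = np.exp(-0.5 * (obs - walkers) ** 2)       # observation likelihoods
        w /= w.sum()
        # Resampling: high-weight walkers split, low-weight walkers are deleted.
        walkers = rng.choice(walkers, size=n, p=w)

    print("posterior mean estimate:", walkers.mean())
    ```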
  • Ryotaro Kamimura, Taeko Kamimura, Thomas R. Shultz
    Article type: Special Paper
    Field: Other
    2001 Volume 16 Issue 2 Pages 287-298
    Published: 2001
    Released: 2002/02/28
    Journal Free Access
In this paper, we propose a new information-theoretic method for competitive learning and demonstrate that it can discover some linguistic rules in an unsupervised way more explicitly than the traditional competitive method. The new method can directly control the competitive unit activation patterns to which input-competitive connections are adjusted. This direct control of the activation patterns permits considerable flexibility for connections and shows the ability to detect salient features not captured by the traditional competitive method. We applied the new method to a linguistic rule acquisition problem. In this problem, unsupervised methods are needed because children learn rules without any explicit instruction. Our results confirmed that the new method gives results similar to those of the traditional competitive method when input data are appropriately coded. However, when unnecessary information is given to a network, the new method can filter it out, while the performance of the traditional method is degraded. Because data in actual cognitive and engineering problems usually contain redundant and unnecessary information, the new method has good potential for discovering regularity in actual problems.
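    For reference, a minimal sketch of the traditional competitive learning baseline that the method improves on (winner-take-all weight update); the proposed information-theoretic control of whole activation patterns is not shown here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((500, 4))                    # input patterns
    W = rng.random((3, 4))                      # weights of 3 competitive units

    for x in X:
        winner = np.argmin(np.linalg.norm(W - x, axis=1))
        W[winner] += 0.05 * (x - W[winner])     # move only the winner toward x
    ```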
  • 上田 修功
    Article type: Special Paper
    Field: Other
    2001 Volume 16 Issue 2 Pages 299-308
    Published: 2001
    Released: 2002/02/28
    Journal Free Access
When learning a nonlinear model, we face two practical difficulties: (1) local optima and (2) determining the appropriate model complexity. As for (1), I recently proposed the split-and-merge Expectation Maximization (SMEM) algorithm within the maximum likelihood framework, which simultaneously splits and merges model components, but the model complexity was fixed there. To overcome both problems, I first formally derive an objective function that can optimize a model over parameter and structure distributions simultaneously, based on the variational Bayesian approach. Then, I devise a Bayesian SMEM algorithm to efficiently optimize this objective function. With the proposed algorithm, we can find the optimal model structure while avoiding being trapped in poor local maxima. I apply the proposed method to the learning of a mixture-of-experts model and show its usefulness.
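    The merge half of a split-and-merge move can be illustrated compactly. Following the SMEM idea, candidate pairs are ranked by the correlation of their posterior responsibilities, since two components that explain the same data points are redundant; the variational-Bayesian objective and the split step are omitted.

    ```python
    import numpy as np

    def merge_candidates(resp):
        """resp: (n_points, n_components) posterior responsibilities."""
        k = resp.shape[1]
        scores = {}
        for i in range(k):
            for j in range(i + 1, k):
                # Cosine similarity of responsibility vectors: high means
                # the two components cover nearly the same data points.
                scores[(i, j)] = resp[:, i] @ resp[:, j] / (
                    np.linalg.norm(resp[:, i]) * np.linalg.norm(resp[:, j]))
        return sorted(scores, key=scores.get, reverse=True)   # best merges first
    ```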