Production systems are among the most widely used models for representing and applying knowledge, and they have proven successful as a software technology for solving ill-structured problems. Although they have been applied in many domains, building such systems remains challenging because of the ill-defined nature of the applications and the novelty of the technology. Since the architecture of production systems differs from that of typical procedural software, many conventional software evaluation techniques cannot readily be applied to them. There is therefore a demand for practical verification systems that help the knowledge-base designer develop production systems and assure their validity. This paper presents two systems for verifying the dynamic properties of OPS5, arguably the most widely used production system. The first predicts the sequence of rules that must fire to reach a given goal state, where a goal state is either a final state or an intermediate state representing meaningful progress toward a final state. The prediction is performed at compile time, using only static information and a set of declarations for the target goal state. The resulting information lets the designer determine whether the way the developed production system will pursue the goal state at run time is valid with respect to the designer's specifications and expectations. The second system automatically detects exceptions that may occur at run time by using meta-knowledge. The meta-knowledge is constructed by the knowledge-base designer in cooperation with the domain expert and represents the design or semantic specification of the knowledge base, such as the valid ranges of attribute values, the ranges of attribute values the knowledge base expects, and inconsistent states that should never arise.
This system is implemented as an extension to the OPS5 framework, and detection is performed at run time in every inference cycle. By introducing both systems into the development process of production systems, the designer's costs are reduced, and the reliability of the completed system can be guaranteed to some extent.
A common problem in reinforcement learning systems (e.g., Q-learning) is reducing the number of trials needed to converge to an optimal policy. As one solution, the k-certainty exploration method was proposed; Miyazaki reported that this method can determine an optimal policy faster than Q-learning in Markov decision processes (MDPs). Although the method is already efficient, we propose an improvement that makes it more so. In the k-certainty exploration method, when there is no k-uncertainty rule (a rule that has not yet been selected k times) in the current state, the agent sometimes wanders randomly until it finds a state where it can select a k-uncertainty rule. We regard this behavior as inefficient. To reduce it, we propose combining the k-certainty exploration method with dynamic programming (DP). Miyazaki's system uses DP only after all rules have been executed at least k times, whereas our method uses the k-certainty exploration method together with DP during learning. Our method takes two kinds of actions. When the agent can select k-uncertainty rules, one of them is selected at random, as in the k-certainty exploration method. When there is no k-uncertainty rule, the agent behaves differently: our method uses DP to compute an optimal policy for moving from the current state to a state in which some k-uncertainty rules remain. The model for DP is constructed from known states only. The outline is as follows. First, the agent builds a map consisting only of known states; in this map, the goals are states containing k-uncertainty rules, and arbitrary state values are assigned to them. Note that the map is not given from outside: it is built purely from experience.
Next, the state values of states containing only k-certainty rules are computed by DP (we used policy iteration). Finally, the agent keeps selecting greedy actions until it reaches a state in which it can select a k-uncertainty rule. With this improvement, we expect the agent to determine an optimal policy faster than with the k-certainty exploration method alone, and we have verified by computer simulation that it does.
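The planning step outlined above can be sketched in a few lines. The names and the map structure below are illustrative assumptions, and a value-iteration style DP is used for brevity (the paper itself reports using policy iteration):

```python
# Sketch: DP over the agent's learned map of known states. "Goal" states are
# those that still contain k-uncertainty (under-tried) rules; they receive an
# arbitrary positive value, and all other known states are valued toward them.

def plan_to_uncertain(known_transitions, uncertain_states, gamma=0.95, eps=1e-6):
    """known_transitions: {state: {action: next_state}}, built from experience only.
    uncertain_states: set of states that still hold k-uncertainty rules (goals)."""
    values = {s: 0.0 for s in known_transitions}
    for s in uncertain_states:
        values[s] = 1.0  # arbitrary value assigned to goal states
    while True:
        delta = 0.0
        for s, acts in known_transitions.items():
            if s in uncertain_states:
                continue  # goal values stay fixed
            best = max(gamma * values.get(ns, 0.0) for ns in acts.values())
            delta = max(delta, abs(best - values[s]))
            values[s] = best
        if delta < eps:
            break
    # greedy policy leading toward the nearest state with k-uncertainty rules
    policy = {s: max(acts, key=lambda a: values.get(acts[a], 0.0))
              for s, acts in known_transitions.items() if s not in uncertain_states}
    return policy, values
```

The agent would then follow `policy` greedily until it reaches a state in `uncertain_states`, where random selection among k-uncertainty rules resumes.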
We have developed a new example-based method for Japanese-to-English translation of tense, aspect, and modality. In this method the similarity between input and example sentences is defined as the degree of semantic matching between the expressions at the ends of the sentences. Our method also uses the k-nearest-neighbor method to exclude the effects of noise, such as wrongly tagged data in the bilingual corpora. Experiments show that our method translates tense, aspect, and modality more accurately than the top-level MT software currently available on the market. Moreover, it requires no hand-crafted rules.
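The k-nearest-neighbor step can be sketched as follows. The function names and the toy suffix-overlap similarity are illustrative assumptions, standing in for the paper's semantic matching over sentence-final expressions; the majority vote is what suppresses isolated mis-tagged examples:

```python
from collections import Counter

def knn_translate(query, examples, similarity, k=3):
    """examples: list of (source_ending, target_label) pairs from a bilingual corpus.
    Returns the majority target label among the k most similar examples, which
    limits the influence of any single wrongly tagged example (noise)."""
    ranked = sorted(examples, key=lambda ex: similarity(query, ex[0]), reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

def suffix_overlap(a, b):
    """Toy stand-in similarity: length of the common suffix of two strings."""
    n = 0
    while n < min(len(a), len(b)) and a[-1 - n] == b[-1 - n]:
        n += 1
    return n
```

With k=3, a single mis-tagged example among the nearest neighbors is outvoted by the two correctly tagged ones.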
Inductive Logic Programming (ILP) has attracted much attention as a new research area of inductive inference based on first-order logic. The main advantages of ILP are its rich expressive power and its ability to use arbitrary background knowledge represented as Prolog programs. However, ILP usually needs enormous computation time to obtain hypotheses, so an efficient algorithm is needed. In this paper, the ILP system Progol is adopted as the target system. Progol is one of the most successful ILP systems; it induces hypotheses by first constructing the most specific hypothesis (MSH) for a given example, and then searching the subsumption lattice that has the empty clause as its top and the MSH as its bottom. The second phase of Progol employs an A*-like algorithm, a kind of exhaustive search based on the A* search strategy. In this paper, we expose redundancy in the A*-like algorithm from the viewpoint of input/output relations among the variables in the literals forming the MSH, and then propose a new search algorithm that removes this redundancy. The main advantage of the proposed algorithm is a substantial reduction of the search space: by preprocessing the given search space, its redundant parts can be removed. Since searching regions of the space that contain no answer is thus avoided, we succeed in reducing both the number of candidate hypotheses generated and the overall computation time for induction.
Many stochastic search algorithms have recently been developed that often make more rapid progress than systematic search algorithms, since stochastic algorithms can sometimes solve large-scale constraint satisfaction problems in practical time. However, such stochastic algorithms have the drawback of getting stuck in local optima that are not acceptable as final solutions. We analyze an iterative improvement algorithm from the viewpoint of the constraint structures that cause local optima. Using the graph-coloring problem with three colors, an archetypal problem for evaluating constraint satisfaction algorithms, we study the local graph structures around which conflicts thrash most heavily. We identify a key constraint structure, the LM pair, which may induce a local optimum, and clarify the mechanism by which conflicting colors on an LM pair obstruct the stepwise refinement of hill-climbing. Experimental results show that LM pairs are strongly correlated with the search efficiency of the stochastic search algorithm.
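As a concrete reference point, a minimal iterative-improvement colorer (min-conflicts style) can be sketched as below. This is a generic illustration rather than the paper's exact algorithm, and it exhibits precisely the failure mode the abstract analyzes: it can stall in a local optimum and return no solution.

```python
import random

def min_conflicts_3color(edges, nodes, max_steps=1000, seed=0):
    """Iterative improvement for 3-coloring: repeatedly pick a conflicted node
    and recolor it with the color that minimizes its local conflicts. It may
    stall in a local optimum, the failure mode analyzed in the paper."""
    rng = random.Random(seed)
    color = {v: rng.randrange(3) for v in nodes}
    adj = {v: [] for v in nodes}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)

    def conflicts(v, c):
        return sum(1 for u in adj[v] if color[u] == c)

    for _ in range(max_steps):
        bad = [v for v in nodes if conflicts(v, color[v]) > 0]
        if not bad:
            return color  # proper coloring found
        v = rng.choice(bad)
        color[v] = min(range(3), key=lambda c: conflicts(v, c))
    return None  # stuck, possibly in a local optimum
```

On small easy instances (a single edge, a triangle) this converges in a few steps; constraint structures such as the LM pairs studied in the paper are what make larger instances thrash.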
In this paper, we develop an organization method for page-information agents that serves as an adaptive interface between a user and a Web search engine. Although a Web search engine returns a hit list of Web pages for a user's query using a large database, the list includes many useless pages. The user must therefore select potentially useful pages based on the page information shown in the hit list, and actually fetch each page to judge its relevance. Unfortunately, since the page information in a hit list is neither sufficient nor necessary for a given user, adequate information is needed for valid selection; which information is adequate, however, depends on the user and the task. We therefore propose the adaptive interface AOAI, in which different page-information agents are organized through man-machine interaction. In AOAI, agents that display different page information in the hit list, such as file size, network traffic, and page title, are prepared first. The user evaluates them while searching with the search engine, and the agents are organized based on this evaluation. As a result, different organizations emerge depending on the user and the task. Finally, we conduct experiments with human subjects and find that AOAI is promising as an adaptive interface between a user and a search engine.
In games where the average number of legal moves is very high, it is not possible to do full-width search to a depth sufficient for good play. One way to achieve deeper search is to reduce the number of moves to search. In this paper a new method for Plausible Move Generation (PMG) is presented that considerably reduces the number of search candidates. This plausible move generation method is applied to shogi. We present different types of plausible move generators for different types of moves, based on the static evaluation of a shogi position. Test results show that in shogi this set of plausible move generators reduces the number of moves to search by 33.2% on average. Plausible move generation is still very accurate: 99.5% of all expert moves in 12097 test positions were generated by our method. Search based on plausible move generation has also been compared with search without it. First, on 298 tactical shogi problems, using plausible move generation increased the number of solved problems by 34%. Second, in a self-play experiment, a shogi program based on plausible move generation beat a shogi program based on full-width search in 80% of the games.
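The general shape of such a filter can be sketched as follows. Everything here, the generator interface, the ranking by static evaluation, and the cutoff, is an illustrative assumption rather than the paper's shogi-specific generators:

```python
def plausible_moves(position, legal_moves, generators, evaluate, keep=10):
    """Generic sketch of plausible move generation: each generator proposes
    moves of one type (e.g. captures, checks) from the legal moves, and the
    union is ranked by static evaluation so that search only expands a
    reduced candidate set instead of the full legal-move list."""
    candidates = set()
    for gen in generators:
        candidates.update(gen(position, legal_moves))
    ranked = sorted(candidates, key=lambda m: evaluate(position, m), reverse=True)
    return ranked[:keep]
```

In a real engine each generator would encode domain knowledge about one move type; the trade-off the paper measures is between the reduction in branching factor and the risk of pruning an expert move.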
In this paper, we discuss a planning and plan-recognition approach to generating advice in a Micro-World. A Micro-World should be able to guide a learner who has reached an impasse: when a learner runs into trouble, the Micro-World should guide the learner by giving advice. To generate appropriate advice, it must be able to construct a correct plan for achieving the learner's goal and to recognize the learner's plan by observing the learner's actions. We therefore discuss planning and plan-recognition abilities in a Micro-World. We point out some problems concerning unobservable actions and actions with harmful effects, and propose methods to solve them. We then introduce our experimental system, which takes chemistry as its domain and lets a learner carry out chemical experiments involving acid-base reactions.
Cellular Automata (CA) are ideally suited to parallel processing and have been characterized as easy to parallelize, so their simulation can be sped up with SIMD (Single Instruction stream, Multiple Data stream) processing. Moreover, even low-cost CPUs now include SIMD capabilities such as MMX technology, one of the best-known SIMD technologies, in which a single instruction operates on multiple pieces of data in one cycle. In this paper, we propose a method for high-speed CA simulation using MMX technology, without special-purpose hardware. Simulation results show that our method is about 10 times faster than scalar arithmetic.
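The core idea, one instruction updating many cells at once, can be illustrated even in pure Python by packing a row of cells into the bits of a single integer; this packing is an illustrative assumption, standing in for the paper's use of MMX's packed 64-bit registers. For elementary CA rule 90, where each new cell is the XOR of its two neighbors, the whole row then updates in a handful of bitwise operations:

```python
def rule90_step(cells, width):
    """One step of elementary CA rule 90 on all `width` cells at once, with
    wrap-around boundaries. Packing the row into a single integer lets one
    bitwise operation update every cell in parallel, the same SIMD idea that
    MMX applies to packed bytes in a 64-bit register."""
    mask = (1 << width) - 1
    left = ((cells << 1) | (cells >> (width - 1))) & mask   # left-neighbor row
    right = ((cells >> 1) | (cells << (width - 1))) & mask  # right-neighbor row
    return left ^ right  # new cell = XOR of its two neighbors
```

A single seed cell spreads into the familiar Sierpinski pattern of rule 90 as this step is iterated.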
Since 1993, the Harada laboratory has been conducting research on the automation of object-oriented analysis, in particular on extracting object-oriented design elements from problem specifications written in Japanese. As the first stage of this work, we have developed the semantic analysis system SAGE, which is practically usable in both performance and accuracy. Given a dependency tree, in which the clauses constituting a sentence are related by dependency arcs, SAGE searches the EDR electronic dictionary, retrieves for each pair of clauses connected by a dependency arc the meaning of the principal word in each clause and the deep case between those words, and assigns a probability to each meaning-case tuple. SAGE then constructs an interpretation tree by attaching these meaning-case tuples and their probabilities to each arc of the dependency tree. Next, SAGE searches for the assignment that maximizes the overall evaluation value, given by the sum of the probabilities of the assigned meanings and cases. Finally, SAGE converts the resulting interpretation tree into a set of semantic frames containing the information about each word and its relations with other words. In developing the system, we sped up the construction of the interpretation tree by reducing the search space, pruning useless meaning-case tuples, and using the branch-and-bound method. The accuracy of the analysis was improved by applying the following four methods: (A) in constructing the interpretation tree, assigning zero probability to every combination of word meanings for which there is no case information in the concept-description dictionary; (B) using empirical rules to infer the deep cases from the surface cases for each dependency between verb clauses; (C) improving the fit of the sentences retrieved from the corpus by using part-of-speech information; and (D) decreasing the number of meaning candidates by using reading information.
As a result, the average time to construct the interpretation of a sentence with nine clauses or fewer was 2 seconds on a PC with a Pentium III processor and 320 MB of memory. The correct-answer rate was 82.1% for meanings and 77.8% for cases.
The objective of this article is to provide a basic formulation of the affordances of an environment. Studies of affordance have mostly focused on its significance for perception, behavior, and workspace, leaving the problem of application unaddressed. With the proposed method, a reinforcement learning algorithm can be applied to a robot in a given environment, so that affordances of the environment are abstracted through the interaction between the reinforcement learning agent and the environment. In the latter part of the paper we conclude that the percipient (robot) should reduce the number of perceptual distinctions in order to obtain valid equivalence relations that abstract affordances from the environment within the limits of incomplete perception, and that the structure of the environment (workspace) restricts the robot's behavior. Future work will therefore focus on the interactive processes between the robot and the workspace, through which the robot can set up its perception for particular tasks, and on how the robot can continuously manage its perception.
A system using a see-through method is proposed that explains the operation of an electrical device (a CD player). When we are shown how to operate an electrical device, we would like to actually operate the device while following the explanation, and the see-through method is a powerful approach for this purpose. The method augments the world by adding computer graphics to the user's view, so that the user obtains more information from the augmented view than from the real world alone. To generate the CG correctly, a registration system is required that tracks the user's viewpoint, the object's location, and so on, and determines the coordinates of the CG. For this purpose, our system uses only the image from a video camera, which keeps the system simple and easy to use. The system is built on two ideas: 1) acquisition of an operator network, which represents an operation sequence, by a planning method, and 2) generation of surface expressions (text, graphics, and so on) from the operator network through an intermediate representation, the Describe Action (DA). The operator network contains operators, which represent operations on the device, and state descriptions, which represent states of the device. These are expressed in the explanation as text and as the augmented view, according to user preference and environment. The network is translated into DAs, and each DA generates a surface expression.
In this paper, we propose a novel method that enables automatic modeling of time-series virtual cities. We apply cellular automata (CA) to lay out many buildings, and a genetic algorithm (GA) to produce time-series changes of the virtual cities. We produce virtual cities by giving the cellular automata several states: vacant ground, a variety of buildings, and two kinds of roads. The GA determines the sequence of applied rules so as to generate the virtual city required by users. Simulation models using CA have been developed to predict urban growth, but they need a large amount of historical data to calibrate the system, and no method for generating original cities has been discussed. We have developed a method that uses artificial-life techniques to model original virtual cities that have the characteristics of actual cities. Examples of virtual cities verify the following: we can generate four types of virtual cities, namely a uniform city, a random city, an ordered city, and a city with several distinct areas like actual cities; and the GA search over rule sequences works well for producing the various types of virtual cities that users need, as well as time series of changing virtual cities.
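The GA layer can be sketched as follows. The representation (an individual is a sequence of CA-rule indices) follows the abstract, while the operators, the parameter values, and the stand-in fitness function are illustrative assumptions:

```python
import random

def evolve_rule_sequence(fitness, n_rules, seq_len, pop_size=20, gens=50, seed=0):
    """Sketch of the GA layer: each individual is a sequence of CA-rule
    indices, and the GA searches for the sequence whose resulting city best
    matches what the user asked for (`fitness` stands in for that score)."""
    rng = random.Random(seed)
    pop = [[rng.randrange(n_rules) for _ in range(seq_len)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # keep the better half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, seq_len)
            child = a[:cut] + b[cut:]            # one-point crossover
            if rng.random() < 0.2:               # occasional mutation
                child[rng.randrange(seq_len)] = rng.randrange(n_rules)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

In the paper's setting, evaluating an individual would mean running the CA with that rule sequence and scoring the resulting city against the user's requirements.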
Designing soccer agents operating on the Soccer Server has become a standard problem in the multiagent domain, and this paper describes soccer agents that can learn to use cooperative tactics. Considering how actual soccer coaches help their players learn to execute tactics, we developed a method by which agents learn to distinguish good tactics from not-so-good ones. It consists mainly of small practice tasks requiring a few agents, acquisition of appropriate cognitive maps by decomposing situations into grid information, and optimization of overall play by a kind of adaptive learning. Because the agents perceive the environment as a grid, they have a finite number of condition spaces and can predict the behavior of opponents by learning conditional probabilities. Each condition has its own utility, learned by an evolutionary method.
The state generalization problem is a significant issue in realizing autonomous agents, which are expected to decide on and learn proper behavior from various kinds of sensor information. This paper proposes a new state generalization method based on maximum-likelihood estimation of the outcomes of the agent's behavior. It provides a general framework that unifies the various heuristic generalization criteria used in previous work, as well as a way of gradually adapting the state space to the environment.
This paper describes the application of relational learning to interactive document retrieval. In this setting, retrieval systems help users find documents effectively through relevance feedback. At present, the vector space model is the typical representation for realizing relevance feedback; however, it can neither express relationships such as proximity nor keep several features separate. We address these shortcomings with a set of rules, constructed by relational learning and used to identify relevant documents. The learning algorithm combines a separate-and-conquer strategy with top-down heuristic search and limited backtracking. Background relations are built only from keywords, so the constructed rules represent keyword combinations that are useful for finding relevant documents. We evaluate the effectiveness of our approach in a document retrieval experiment using a test-bed database. The results show that our method improves both effectiveness and efficiency compared to a standard method that uses only the query vector. Finally, we consider the benefit and the cost of rule construction.
In this paper, we present a theoretical analysis of and experiments on the Simplex Crossover (SPX), which we proposed previously. Real-coded GAs are expected to be a powerful function optimization technique for real-world applications in which it is often hard to formulate the objective function. However, we believe two problems make such applications difficult: 1) the performance of real-coded GAs depends on the coordinate system used to express the objective function, and 2) considerable labor is required to tune parameters so that the GAs always find an optimum efficiently. Our theoretical analysis and experiments show that the performance of SPX is invariant under linear coordinate transformations, and that SPX optimizes various test functions efficiently when the theoretically derived value of the expansion rate, a parameter of SPX, is used. We also show that BLX-α is equivalent to a degenerate form of SPX. Experiments further suggest that the effect of epistasis on the performance degradation of real-coded GAs has been misunderstood.
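For reference, the SPX operator can be sketched as below. This follows the published formulation as commonly described (expansion of the parents about their centroid by rate ε, followed by recursive uniform sampling of the expanded simplex), so treat the details as a best-effort reconstruction rather than the authors' exact code:

```python
import random

def spx(parents, epsilon=None, rng=random):
    """Sketch of Simplex Crossover (SPX): n+1 parents in n dimensions are
    expanded about their centroid by rate epsilon, and one child is sampled
    from the expanded simplex. epsilon = sqrt(n + 2) is the theoretically
    recommended expansion rate referred to in the abstract."""
    n = len(parents) - 1  # dimensionality = number of parents minus one
    if epsilon is None:
        epsilon = (n + 2) ** 0.5
    centroid = [sum(p[d] for p in parents) / len(parents) for d in range(n)]
    # expand each parent away from the centroid
    y = [[centroid[d] + epsilon * (p[d] - centroid[d]) for d in range(n)]
         for p in parents]
    # recursively combine the expanded vertices into one child
    c = [0.0] * n
    for k in range(1, n + 1):
        r = rng.random() ** (1.0 / k)
        c = [r * (y[k - 1][d] - y[k][d] + c[d]) for d in range(n)]
    return [y[n][d] + c[d] for d in range(n)]
```

Because the sampling only ever forms convex combinations of the expanded vertices, the child always lies inside the expanded simplex; the coordinate-system invariance discussed in the abstract follows from the construction using only the parents themselves.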
In this paper, we apply Inductive Logic Programming (ILP) to acquire graphic design knowledge. Acquiring design knowledge is a challenging task because such knowledge is complex and vast. We thus focus on principles of layout and constraints that layouts must satisfy to realize automatic layout generation. Although we do not have negative examples in this case, we can generate them randomly by considering that a page with just one element moved is always wrong. Our nonmonotonic learning method introduces a new predicate for exceptions. In our method, the ILP algorithm is executed twice, exchanging positive and negative examples. From our experiments using magazine advertisements, we obtained rules characterizing good layouts and containing relationships between elements. Moreover, the experiments show that our method can learn more accurate rules than normal ILP can.