1990, Volume 5, Issue 3, Pages 333-342
The ATMS is widely used in many subfields of AI. Although the ATMS provides more functionality, and more efficiently, than justification-based TMSs, its performance is still poor enough to limit its applications. This paper describes AMI, a parallel implementation of the ATMS on a shared-memory multiprocessor, designed to improve that performance. Our implementation introduces a new type of node, called a Justification-node, into the ATMS network. The Justification-node not only serves as a source of parallelism but also provides a means of implementing certain kinds of hyperresolution and of controlling updates to the ATMS network. Although processing Justification-nodes is a good source of parallelism, an analysis of more than twenty trace files from real ATMS applications (e.g., qualitative simulation) shows large variation in the execution times of ATMS commands. We therefore decompose the task into four levels of granularity, from concurrent ATMS commands (coarse-grained) down to concurrent implementations of individual ATMS commands (fine-grained), and control the granularity of parallelism at runtime. This runtime control of granularity is encoded in QLISP, a parallel Lisp that enables the user to control the spawning of processes at runtime. The resulting parallel AMI attains up to a 3.5-fold speed-up on 4 processors for the benchmarks tried so far.
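The runtime granularity control described above can be sketched as follows. This is an illustrative Python sketch, not the authors' QLISP code: the names `SPAWN_THRESHOLD`, `work_estimate`, `run_inline`, and `maybe_spawn` are all hypothetical, and the cost model is a stand-in for whatever estimate a real implementation would use. The idea it shows is the one the abstract describes: decide at runtime, per task, whether the work is coarse-grained enough to justify spawning a parallel process or should instead run inline.

```python
# Sketch of runtime granularity control (assumed scheme, not the AMI code):
# coarse-grained tasks are spawned onto worker threads; fine-grained tasks
# run inline to avoid spawning overhead, mirroring QLISP's runtime control
# over process spawning.
from concurrent.futures import ThreadPoolExecutor

SPAWN_THRESHOLD = 8  # hypothetical: minimum estimated work units worth a spawn

def work_estimate(task):
    # Hypothetical cost model: here, simply the number of subtasks.
    return len(task)

def run_inline(task):
    # Stand-in for the actual ATMS work on a Justification-node.
    return [subtask * 2 for subtask in task]

def maybe_spawn(executor, task):
    """Spawn only coarse-grained tasks; run small ones inline.

    Returns a zero-argument callable so the caller forces the result the
    same way regardless of which path was taken."""
    if work_estimate(task) >= SPAWN_THRESHOLD:
        future = executor.submit(run_inline, task)
        return future.result
    result = run_inline(task)
    return lambda: result

with ThreadPoolExecutor(max_workers=4) as executor:
    tasks = [[1, 2, 3], list(range(10))]   # one fine-grained, one coarse-grained
    results = [maybe_spawn(executor, t)() for t in tasks]
```

In this sketch the first task falls below the threshold and runs inline, while the second is submitted to the pool; the caller cannot tell the difference, which is the property that lets granularity be chosen per task at runtime.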