In this paper, we develop a functional theory of emotions. Emotions are described as changes in the state of readiness for maintaining or modifying particular forms of relationship with the environment, and as control shifts in favor of those states of readiness and the behaviors flowing from them. Emotional feelings consist of the subjective awareness of those changes in state of readiness, and of the events eliciting them. Emotions are elicited by events that are appraised as relevant to the individual's concerns. Concerns can be understood as representations of the basis for the individual's preferences or preferred states of the world. Detecting the relevance of events to the concerns is a function of the emotions. This emotional relevance detection proceeds in parallel with other activities, such as the execution of tasks. Emotion thus may interrupt ongoing activities, or may be suppressed in favor of such activities. In order to model such processes, an architecture has been developed consisting of a set of parallel-operating, semi-independent modules communicating by way of a central blackboard; modules take their information from the blackboard, and/or deposit their results on it. A program, called “Will”, embodying the model and architecture, is under development.
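The blackboard architecture described above can be sketched in a few lines. This is a minimal illustration under our own assumptions, not the implementation of “Will”; all module names, blackboard keys, and the toy appraisal rule are hypothetical.

```python
# Illustrative sketch of a blackboard with semi-independent modules:
# each module reads what it needs from the blackboard and posts results back.
# Module names, keys, and the appraisal rule are assumptions, not the paper's.

class Blackboard:
    def __init__(self):
        self.entries = {}

    def post(self, key, value):
        self.entries[key] = value

    def read(self, key):
        return self.entries.get(key)

def relevance_detector(bb):
    # Appraise the perceived event against the stored concerns.
    event = bb.read("event")
    concerns = bb.read("concerns") or []
    if event in concerns:
        bb.post("relevant", event)

def action_readiness(bb):
    # Shift the state of readiness when a concern-relevant event is found,
    # and flag an interrupt of the ongoing task.
    if bb.read("relevant") is not None:
        bb.post("readiness", "approach")
        bb.post("interrupt", True)

bb = Blackboard()
bb.post("concerns", ["threat", "novelty"])
bb.post("event", "threat")
for module in (relevance_detector, action_readiness):
    module(bb)

print(bb.read("readiness"), bb.read("interrupt"))  # approach True
```

In a fuller model the modules would run concurrently and poll the blackboard; here they are called in sequence only to keep the sketch self-contained.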
The use of computers for translating human affective communication into symbolic form, and for conveying rudimentary simulated emotions, has been little explored. In this short article we introduce an emotion reasoning engine called the Affective Reasoner, based in part on the ideas of Ortony et al. (1988), along with its recent multi-media extensions, with which we attempt to address this problem. We suggest that users may be most fully engaged with the computer, for certain tasks, by taking advantage of what appear to be innate human tendencies toward social and emotional interchange. We discuss two preliminary areas of exploration, based on speech recognition technology, in which we develop the spoken communication lexicon between user and computer. The first has to do with parsing emotion inflection in simple human utterances. The second has to do with interactively extending a base lexicon of spoken phrases that includes 198 emotion words, as well as tokens describing relationship, mood, and emotional intensity, so that users may add simple non-emotion tokens of their own choosing. This allows them to communicate about emotion situations, in diverse domains, without programmer intervention. Lastly we discuss the emotionally expressive channels the multi-media computer has at its disposal. Among these are rudimentary emotionally inflected speech; indexed sub-second access to affect-inducing, and enhancing, music; schematic facial expression supporting over 60 expressions, as well as over 3000 dynamically constructed morphs; and explicit emotion content in utterances generated by the underlying emotion engine.
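The user-extensible lexicon described above might be sketched as follows. This is our own minimal construction, not the Affective Reasoner's data structures; the token names, categories, and the rule that base emotion tokens cannot be redefined are all illustrative assumptions.

```python
# Sketch of an extensible spoken-phrase lexicon: a base set of emotion,
# relationship, mood, and intensity tokens, to which users add non-emotion
# tokens of their own choosing. All names and categories are assumptions.

base_lexicon = {
    "joy": "emotion",
    "anger": "emotion",
    "friend": "relationship",
    "gloomy": "mood",
    "very": "intensity",
}

def add_user_token(lexicon, token, category="domain"):
    # Users may extend the lexicon without programmer intervention,
    # but (in this sketch) may not redefine base emotion tokens.
    if lexicon.get(token) == "emotion":
        raise ValueError(f"cannot redefine emotion token: {token}")
    lexicon[token] = category
    return lexicon

def parse_utterance(lexicon, utterance):
    # Keep only the tokens the lexicon knows, tagged with their category.
    return [(w, lexicon[w]) for w in utterance.lower().split() if w in lexicon]

add_user_token(base_lexicon, "deadline")
print(parse_utterance(base_lexicon, "very gloomy about the deadline"))
# [('very', 'intensity'), ('gloomy', 'mood'), ('deadline', 'domain')]
```

Unknown words are simply dropped, which mirrors the idea that the spoken-phrase vocabulary, not free text, defines what user and computer can communicate about.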
The current implementation runs on an IBM (compatible) PC while still maintaining sub-second response time to (spoken) user utterances, to dynamic generation of new content in spoken text, to morphed changes in facial expression, and to retrieval and presentation of affect-bearing music, constraints we consider essential for supporting plausible affective interaction with the user.
The urge theory, developed by the author, aims at a comprehensive model of human emotion, cognition, and individual and social behavior. Any model that pursues as remote a goal as this has to employ, as a means to its verification, computational formalizations of whatever parts of the theory allow them. In this paper, a few possibilities for such partial computational formalization are demonstrated, even though none of them is complete as yet. The urge theory starts with an explication of emotions. Because of the inherent ambiguity of the everyday notion of emotion, the theory introduces three basic concepts of its own: the urge activity plan, the mood-state, and the emotional attitude. The major content of this paper consists of first laying foundational remarks on these three concepts, and then discussing in somewhat more detail appraisal, attention, and the structure of the urge activity plan. With this last topic, a new concept, the versatile system structure, an elaboration of the idea used by Minsky in his Society of Mind model, is introduced.
The study of emotion has always been a difficult and controversial subject. The main reason seems to be that, in spite of the many years of investigation and the abundance of studies aiming at understanding human emotions, virtually no consensus has been reached. In this paper we argue that there are fundamental underlying problems and that a radically different approach is needed. We propose a new one, called the “New Fungus Eater Approach”. It is illustrated by experiments with autonomous robots, the “New Fungus Eaters”, which are named after their predecessor, the “Solitary Fungus Eater” invented by Masanao Toda in 1961. This approach is outlined, and it is demonstrated how it can contribute to understanding the foundations of what one might want to call emotional processes. It is also discussed how some of the basic controversies “disappear” in this way.
This paper reports a case study of a logical inference model of emotion applied to ‘chagrin.’ Chagrin caused by a failure in executing a plan is examined from a problem-solving viewpoint. When a state of chagrin arises, people try to be released from it by means of behavior patterns such as those depicted in Aesop's fable of the sour grapes. In addition, a logical dependency relation among the knowledge units employed in this process is described.
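The sour-grapes pattern can be rendered as a tiny dependency sketch. This is our own construction, not the paper's inference model: chagrin is taken to depend on two supporting beliefs, and retracting one of them releases the emotion.

```python
# Illustrative dependency sketch (not the paper's model): chagrin holds
# only while both "the goal was desirable" and "the plan failed" hold.
# Retracting the desirability belief is the sour-grapes release pattern.

beliefs = {"goal_desirable", "plan_failed"}

def chagrin(beliefs):
    # Chagrin is supported by the conjunction of both beliefs.
    return "goal_desirable" in beliefs and "plan_failed" in beliefs

print(chagrin(beliefs))            # True
beliefs.discard("goal_desirable")  # "the grapes are probably sour anyway"
print(chagrin(beliefs))            # False
```

The point of the sketch is only the dependency relation: the emotion is not retracted directly, but by withdrawing a knowledge unit on which it logically depends.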
Visual search tasks, in which a target must be found among distractors, are used to test the performance of early vision. Fast and spatially parallel detection is taken as evidence that the features in question are coded early in the visual process. Targets that are defined by conjunctions of features are usually found through a serial process of checking and rejecting distractors. The search time increases linearly with the number of distractors, suggesting that attention must be focused on each item in turn in order to conjoin features. Based on these recent studies of visual search, a model of visual attention is proposed. It is assumed that a visual image is encoded in a multi-resolution pyramid and that an attention function selects a sampling area from the pyramid. The function, guided by top-down and bottom-up mechanisms, gives priority to sampling. Each new sample is combined with previous samples so that the reconstruction is as visually recognizable as possible at any moment. A computer simulation of the model produces the same general characteristics as human performance.
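The model's two ingredients, a multi-resolution pyramid and a priority-driven attention function, can be sketched roughly as below. This is a simplified illustration under our own assumptions (2x2 mean pooling, additive top-down bias), not the paper's simulation; all names and values are hypothetical.

```python
# Sketch: build a multi-resolution pyramid over an image, then let an
# attention function pick the highest-priority cell at the coarse level,
# combining bottom-up intensity with a top-down bias. Values are toy data.

def downsample(image):
    # Halve resolution by 2x2 mean pooling.
    h, w = len(image), len(image[0])
    return [[(image[2*i][2*j] + image[2*i][2*j+1] +
              image[2*i+1][2*j] + image[2*i+1][2*j+1]) / 4.0
             for j in range(w // 2)] for i in range(h // 2)]

def pyramid(image, levels):
    out = [image]
    for _ in range(levels - 1):
        image = downsample(image)
        out.append(image)
    return out

def select_sample(coarse, top_down):
    # Priority = bottom-up value + top-down bias; attend to the max cell.
    best, best_pos = float("-inf"), (0, 0)
    for i, row in enumerate(coarse):
        for j, v in enumerate(row):
            priority = v + top_down.get((i, j), 0.0)
            if priority > best:
                best, best_pos = priority, (i, j)
    return best_pos

# A 4x4 "image" with a bright patch in the lower-right quadrant.
img = [[0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 8, 8],
       [0, 0, 8, 8]]
pyr = pyramid(img, 2)                      # levels: 4x4 and 2x2
pos = select_sample(pyr[1], top_down={})
print(pos)  # (1, 1): attention goes to the bright quadrant
```

In the full model, the selected area would be sampled at a finer pyramid level and combined with previous samples into the running reconstruction; the sketch stops at the selection step.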