Dual-process theorists posit that human thinking involves two kinds of mental processing: System 1 (an intuitive process), which is generally reliable but can lead to fallacies and biases, and System 2 (a reflective process), which can, at its best, allow human reasoning to follow normative rules. One of the most significant problems for dual-process theories is whether System 2 can control the outputs of System 1, which are sometimes non-normative. Generally, when the output of System 1 entails a single emotion such as fear, that emotion is less likely to be revised or suppressed by System 2. I introduce two major periods as natural experiments in history: the period between the 17th and 18th centuries in Europe and the period after the Second World War. Both periods are characterized as times when war, murder, and violence decreased while sympathy for victims and awareness of human rights grew. The prevalence of novels, which enhanced the mindreading (a function of System 1) of victims, was an important factor in the former case. The growth of people's intelligence (linked to System 2) was an important factor in the latter case. The methodology of the natural experiment in history thus carries significant implications for the issue of System 2 control.
The social learning process plays a key role in the emergence of collective intelligence in our society. The recent development of computational frameworks with cognitive modeling has enabled us to mathematically track how people combine their personal experiences with social information. In this article, we first present several variations of social learning processes and the situations in which social learning can be beneficial for each individual. Next, we outline a game-theoretic dilemma that arises from the interdependence among the individuals who constitute a group. As in the “tragedy of the commons” in social dilemmas, rational self-interested individuals could exploit others’ exploratory findings through social learning while free-riding on information search. We review how groups of individuals can overcome this challenge and achieve collective intelligence. Finally, we demonstrate how large collectives of rational individuals may spread inaccurate information on the Internet and cause unpredictability in society, such as the diffusion of false information or information cascades. We discuss two possible ways to counter such unintended maladaptive problems in a large-scale society: nudges and algorithmic backups. Many studies have shed light on how various nudge techniques can mitigate the madness of crowds. Although these efforts are certainly helpful, we argue that interventions based on a deeper understanding of the algorithms of human decision making may provide more fundamental aid in preventing the spread of false information in our society.
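The combination of personal experience and social information described above is often modeled with a mixed choice rule. The following is a hypothetical sketch of one such rule, not the specific model from the article: an agent's choice probability blends a softmax over privately learned values with a conformity-weighted frequency of others' choices (all parameter names and values here are illustrative assumptions).

```python
import math

def choice_probs(q_values, social_counts, sigma=0.3, theta=2.0, beta=3.0):
    """Mix asocial and social information (illustrative sketch).

    P(i) = (1 - sigma) * softmax(beta * Q)_i + sigma * F_i,
    where F_i weights observed choice frequencies by a conformity
    exponent theta (theta > 1 models conformist copying).
    """
    # Asocial component: softmax over privately learned values.
    exp_q = [math.exp(beta * q) for q in q_values]
    asocial = [e / sum(exp_q) for e in exp_q]
    # Social component: normalized, conformity-weighted frequencies
    # (a small additive prior avoids zero probabilities).
    powed = [(c + 0.1) ** theta for c in social_counts]
    social = [p / sum(powed) for p in powed]
    return [(1 - sigma) * a + sigma * s for a, s in zip(asocial, social)]

# An option popular among others gains probability even when the agent's
# own values are indifferent between the options.
probs = choice_probs(q_values=[0.0, 0.0], social_counts=[9, 1])
print(probs[0] > probs[1])  # True
```

Rules of this shape capture the free-rider tension in the abstract: when sigma is high, an agent can exploit others' exploratory findings without searching itself.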
In this paper, we discuss AI’s rationality and explain the rationality of a human–AI system in our adaptive trust calibration. First, we describe AI’s rationality by introducing the formalization of reinforcement learning. Then we explain our adaptive trust calibration, which has been developed for rational human–AI cooperative decision making. The safety and efficiency of human–AI collaboration often depend on how appropriately humans can calibrate their trust in AI agents; over-trusting an autonomous system sometimes causes serious safety issues. Although many studies have focused on the importance of system transparency for maintaining proper trust calibration, research on detecting and mitigating improper trust calibration remains very limited. To fill these gaps, we propose a method of adaptive trust calibration consisting of a framework that detects inappropriate calibration status by monitoring the user’s reliance behavior, together with cognitive cues called “trust calibration cues” that prompt the user to re-execute trust calibration.
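The reinforcement-learning formalization of rationality mentioned above treats an agent as rational when it acts to maximize expected cumulative discounted reward. A minimal sketch of the standard tabular Q-learning update (a generic illustration, not necessarily the exact formalization in the paper):

```python
# Tabular Q-learning: one temporal-difference update toward the
# reward-maximizing value estimate. All names and values are illustrative.

def q_learning_update(Q, state, action, reward, next_state,
                      alpha=0.1, gamma=0.9):
    """Apply Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[next_state].values())   # greedy lookahead value
    td_target = reward + gamma * best_next    # bootstrapped target
    Q[state][action] += alpha * (td_target - Q[state][action])
    return Q

# Toy two-state example: a single rewarded transition nudges the
# estimate for the taken action upward.
Q = {"s0": {"left": 0.0, "right": 0.0},
     "s1": {"left": 0.0, "right": 0.0}}
Q = q_learning_update(Q, "s0", "right", reward=1.0, next_state="s1")
print(Q["s0"]["right"])  # 0.1
```

In this framing, "AI rationality" is well defined relative to the reward function, which is why calibrating human trust in the agent requires knowing what objective the agent is actually optimizing.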
This paper reviews and revisits the concept of rationality in the psychology of thinking. First, I consider the ambiguity of the concept of rationality. I point out that this ambiguity is due to (1) the indeterminacy of the normative system itself; (2) differences in how the task, solver, and environment are perceived; (3) differences in viewpoint, such as the theoretical versus the practical; and (4) the duality of cognitive processes. I then show that rationality is a goal-dependent concept, and that such ambiguity can mostly be sorted out by the notion of conflict among multiple goals. Next, based on recent findings on reasoning and judgment in autism spectrum disorders, I point out that previous research has relied on “clipped-out” thinking, which is assumed to be rational. Such thinking is non-creative, as the goal is predetermined and given from outside the target system. However, since such thinking can deviate greatly from what is rational in the ordinary sense, I argue that an aspect of creativity is essential to the concept of rationality in actual situations. Finally, I argue that a well-being perspective is indispensable for rational and creative thinking, and that the concepts of self and consciousness are indispensable for acquiring such a perspective.
In this article, we discuss two types of intervention for making judgments and decisions more rational: nudges and boosts. In particular, we discuss the characteristics, theoretical background, and problems of nudges and boosts. We argue for the importance of multiple perspectives, such as the individual, the group, and psychological conflict, in making judgments and decisions more rational.
In cognitive science, the ir/rationality of human intelligence has long been debated. In this paper, we focus on the simple heuristics that humans use and review the historical background of the rationality of heuristics in order to understand several perspectives on rationality. Historically, the rationality of heuristics has been discussed mainly in terms of satisficing (Simon’s bounded rationality), deviations from logical principles (Tversky and Kahneman’s heuristics-and-biases program), matching between heuristics and environmental structures (Gigerenzer’s ecological rationality), and optimal allocation of cognitive resources (Lieder and Griffiths’s resource rationality). Finally, we discuss possible directions for future research on the rationality of heuristics.
Reasoning about what someone else is thinking involves intentionality, as expressed by “I think that you think that I am mistaken.” Studies involving adults have found that this understanding of others’ beliefs is limited to about fifth-order intentionality. But is higher-order reasoning, such as divining fifth-order beliefs, likely to be used in everyday life? The current study devised a reasoning task in which both third-order and fourth-order beliefs could be used, and identified which beliefs are more likely to be used. A task that permitted only fourth-order beliefs showed that reasoning about fourth-order beliefs was used correctly. In a task where both third- and fourth-order beliefs could be used, however, most participants used third-order beliefs. These results suggest there is some rationale for why many people use beliefs only up to the third order: reasoning about third-order beliefs imposes less cognitive load than reasoning about higher-order beliefs. The results also suggest that the reasoning people routinely perform rarely engages fourth-order beliefs.
The brain deals with rationality-related processes in multi-dimensional and diverse ways. The brain is thought to begin processing such complex cognitive operations immediately after a stimulus appears, and components of event-related potentials (ERPs) can reflect the processes underlying them. For example, mismatch negativity (MMN) reflects the violation of a rule established by a sequence of sensory stimuli. The N2 component is thought to be related to cognitive control. The P300 component is elicited when stimulus detection engages memory operations. The N400 is related to meaning processing. The P600 is associated with syntactic and semantic reanalysis processes. Early left anterior negativity (ELAN) is thought to be a marker of syntactic first-pass parsing. Error-related negativity (ERN) is a negative waveform that arises after a participant makes a detectable error. These components can reflect various rationality-related processes in decision making. In dual-process theory, there are two distinct types of processes: a fast, intuitive process (System 1, or Type 1) and a deliberative, reflective process (System 2, or Type 2). The two processes are thought to deal with rationality according to their respective characteristics, but the detailed processes remain unclear. Analysis using ERP components is expected to reveal the time course of rationality-related processes up to about 1000 ms after stimulus onset.
Religious belief has often been labelled “irrational”; however, in The rationality of heuristic religious belief, Wood (2012) proposed that religion can be understood as a set of heuristic devices that bring sub-optimal solutions to a complex and uncertain world. Wood’s philosophical argument successfully reframed rationality from an adaptive perspective, evaluating whether such beliefs increase adaptability in a natural or social environment; however, since his arguments focused on philosophical issues, further investigation with empirical studies and theoretical modeling is needed. In the last few decades, studies in the cognitive and evolutionary science of religion have accumulated findings supporting the view of ‘religion as a set of adaptive heuristic devices.’ Here, we review both the empirical and theoretical literature on religion that could support the adaptive rationality of religious beliefs, focusing on three topics: the adaptive aspects of superstitions, belief in supernatural agents, and rituals. Collectively, findings from these areas support Wood’s view that religion can be rational in the sense of adaptation to ecological and social environments. We also discuss ongoing debates over the replicability of findings in the field and encourage further studies to perform more robust tests of the hypothesis.
Rationality has been one of the most important theoretical concepts in cognitive science, and numerous rationality concepts have been proposed to represent human decision-making processes. This paper aims to draw a map of rationality in the psychology and cognitive science literature by clarifying these concepts’ theoretical features, functions, and similarities. To do so, it reviews the many rationality concepts proposed in the literature on higher cognition, including reasoning and judgment and decision making, explaining how these concepts account for biases in human thinking and how they have changed over the history of judgment and decision-making research. The paper points out that rationality is now considered a goal of, or evidence for, cognitive models, and that recent rationality concepts treat decision making as resolving trade-offs among multiple conflicting aims, such as expected utility and computational efficiency.
Reckless betting is placing a larger bet on a gamble in which losing is more likely than winning; it can be considered irrational behavior that violates normative rationality. In the present study, we focused on information processing styles derived from dual-process theory and examined the hypothesis that a rational information processing style inhibits reckless betting. In Study 1 (N = 41), we conducted an exploratory study on the relationship between trait variables, including information processing style, and reckless betting. An online between-subjects experiment was conducted in which the number of wins and losses in the first session of a gambling task was manipulated to induce reckless betting. Participants’ traits, affective states, and recklessness during the gambling task were measured. The results indicated that a rational information processing style inhibited reckless betting by moderating the effect of positive affect on it. In Study 2 (N = 77), we determined the sample size based on the effect size observed in Study 1 and attempted to replicate Study 1 with a simplified experimental procedure. Studies 3 (N = 75) and 4 (N = 76) attempted to replicate Study 1 while varying the time pressure placed on the task. The results, however, were not consistent with those of Study 1. We discuss these inconsistencies in terms of situational factors specific to online experiments and goal-oriented changes in behavioral norms.
Myside bias is the tendency to actively generate reasons favorable to one’s position and to be reluctant to generate reasons unfavorable to it. This study distinguished between quantitative myside bias (the number of reasons in favor of a position exceeds the number of reasons against it) and qualitative myside bias (a person does not generate valid counterarguments to their most important myside reason), and examined the relationship between myside bias and polarized thinking. In Study 1, university students (n = 75) were asked to write reasons both for and against four topics and to rate their confidence in their chosen position. The results indicated that polarized thinking intensified after writing the reasons, and that qualitative myside bias may be positively related to this tendency. In Study 2, university students (n = 130) were divided into (1) a control condition in which they generated reasons as in Study 1, and (2) an experimental condition in which they were asked to consider the falsifiability of their most important myside reason; their qualitative myside bias and confidence in their position were then compared. The results indicated that it was difficult for participants to refute their most important myside reasons, and the accentuation of polarization after writing reasons was reconfirmed. However, the results also indicated that polarized thinking may be suppressed in students with reduced qualitative myside bias, and methods for reducing qualitative myside bias are discussed.
Pervasive misinformation is a primary social issue in the digital age. A common method for addressing this issue is issuing corrections to mitigate the false beliefs that misinformation produces. However, the influence of misinformation often predominates, so that corrections have only a limited effect on alleviating people's false memory and reasoning. This psychological phenomenon is known as the continued influence effect of misinformation. Rapidly evolving research has accumulated into a sizable literature explaining the psychological processes that cause this effect. This article seeks to clarify those processes in order to explore ways to curb the negative impact of misinformation on our minds. Specifically, we review cognitive models and factors related to the continued influence effect, as well as a potential side effect of correction. Moreover, we summarize practical recommendations for interventions based on psychological characteristics. Finally, we discuss future directions in psychology and how emerging interdisciplinary research can contribute to controlling the harmful impact of misinformation on our society.