Simulation and Organizational Studies in Japan

Abstract: Since the 1990s, simulation and organizational studies have been conducted in Japan. In this paper, we review simulation and organizational studies in Japan, including the relationships between researchers. The global trend is to cite the results of simulation studies merely as metaphors. By contrast, Japan has unique research groups that critically examine existing models, perform simulations, and further test them against survey data. The lessons they learned are: (a) animation of simulation results stirs the imagination of researchers and businesspeople; however, (b) if the phenomena indicated by a simulation and the realism of its parameter values are not supported by survey data, the implications derived from the simulation are no more than a delusion.


Introduction
The development of ENIAC, the first general-purpose digital electronic computer, began in 1943 during World War II with the goal of calculating the trajectories of the new artillery pieces being developed and introduced one after another (Takahashi, 2011, 2013a). ENIAC was not completed before the end of the war; however, prior to the completion ceremony in February 1946, the first calculation conducted to test ENIAC concerned a plane wave in the explosion of the hydrogen bomb then under development at Los Alamos National Laboratory (Takahashi, 2011, 2013a). In either case, everything could be calculated on a computer instead of actually firing shells or exploding a hydrogen bomb; in this sense, the computer was originally created as a machine for simulation. For management applications, the MIT group ran a simulation model incorporating positive/negative feedback loops on a computer and called it Industrial Dynamics (Forrester, 1961). Around the same time, the Carnegie Institute of Technology (renamed Carnegie Mellon University in 1967) performed a similar simulation of corporate behavior (Cyert & March, 1963). In both cases, corporate behavior was represented by mathematical formulas that were calculated over time. As a result, simulated corporate behavior was represented by a curved or polygonal line, with time on the horizontal axis and the focal variable on the vertical axis.
DYNAMO, the compiler dedicated to Industrial Dynamics, came to be used for urban problems, resource problems, and so on, and the approach was called System Dynamics (SD). The results of the SD analysis that the Club of Rome commissioned from the MIT project team were published as The limits to growth (Meadows, Meadows, Randers, & Behrens, 1972), which attracted worldwide attention. However, the causal structure of the world model (e.g., Meadows et al., 1972, pp. 102-103, Figure 26) was too complicated, serving as no more than a black box. In the end, SD had hardly penetrated Japan by the end of the 1980s (Miyakawa & Kobayashi, 1988), and the same was true of the simulations of Cyert and March (1963). However, simulation and organizational studies began in the 1990s in Japan. Inamizu (2013) reviewed the simulation studies conducted in the United States, including the relationships between researchers. In this paper, we review the simulation studies in Japan using the same method. Unlike the global trend, which cites only the results of simulation studies as metaphors, there are unique research groups in Japan which critically examine existing models, perform simulations, and further test them against survey data.

Garbage Can Model
The garbage can model of Cohen, March, and Olsen (1972), published in the same year as The limits to growth, suggested in simulation the existence of a seemingly insane but interesting phenomenon: "decision making by flight." Does it really exist? The author surveyed approximately 8,500 white-collar workers in a total of 40 Japanese companies every year for 10 years, from 1991 (Takahashi, 1992) until 2000. It was found that 53% of the workers had experienced the phenomenon (Takahashi, 2015, Chapter 4). In fact, however, studies that confirm such results of the garbage can model have been rare worldwide.
Indeed, few subsequent studies have actually tried the simulation, although Cohen et al. (1972) was accompanied by a simulation program. Guido Fioretti has collected and published garbage can model programs on his website; however, there are only eight programs in total. The author's Single Garbage Can Model (Takahashi, 1997a) is among them; in chronological order, Takahashi (1997a) is the next after Cohen et al. (1972), coming 25 years later.
In fact, the program of Cohen et al. (1972) was flawed, though no one pointed this out because no one seriously tried to run it again. Inamizu (2015) clearly indicates the following three points: (1) their program failed to detect the three decision-making styles (decision making by resolution, by oversight, and by flight); (2) decision making sometimes occurred not only when choice opportunities had no problems but even when a choice opportunity had no decision maker; and (3) even when no problems were attached to choice opportunities, the initial setting was programmed as though there were, perhaps to avoid the decision making seen in (2). In other words, the garbage can model has never been properly examined and has only been referred to as a metaphor (e.g., Lynn, 1982).
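The flavor of such a simulation, and of the three decision-making styles, can be conveyed by a minimal sketch. The following is neither the program of Cohen et al. (1972) nor the Single Garbage Can Model of Takahashi (1997a), but a deliberately simplified single-can illustration written for this review; all rules and parameter values (arrival, flight, and energy rates) are assumptions made here.

```python
import random
from collections import Counter

def single_garbage_can(rng, steps=500, p_arrive=0.1, p_flight=0.1,
                       base_load=5.0, problem_load=6.0, energy_per_step=1.0):
    """One choice opportunity (garbage can). Problems attach to it and may
    fly away to other choice opportunities; decision makers pour in energy;
    a decision is made once the accumulated energy covers the base cost of
    deciding plus the load of the problems still attached."""
    problems = 0           # problems currently attached
    ever_attached = False  # did any problem ever attach?
    energy = 0.0
    for _ in range(steps):
        if rng.random() < p_arrive:          # a new problem arrives
            problems += 1
            ever_attached = True
        # each attached problem may fly to another choice opportunity
        problems -= sum(rng.random() < p_flight for _ in range(problems))
        energy += energy_per_step            # decision makers supply energy
        if energy >= base_load + problems * problem_load:
            if problems > 0:
                return 'resolution'          # decided with its problems solved
            return 'flight' if ever_attached else 'oversight'
    return 'no decision'

rng = random.Random(0)
styles = Counter(single_garbage_can(rng) for _ in range(1000))
```

Under these illustrative settings all three styles occur: oversight when the decision is made before any problem attaches, flight when the attached problems leave before the decision, and resolution otherwise.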

The Parameter to Evolve Cooperation
The most influential simulation study in political science is Robert Axelrod's The evolution of cooperation (Axelrod, 1984). Based on the novel idea of having computer programs play the repeated prisoner's dilemma against each other, he conducted the two types of simulation described below.
A) He conducted the first round of a tournament, in other words, a round-robin league among the computer programs of 14 professional game theorists (Axelrod, 1980a). On the basis of feedback from an analysis of the first-round results, he conducted a second round of the tournament with the programs of 62 players, including players from the first round (Axelrod, 1980b).
B) He then conducted a simulation of evolution over 1,000 generations under the rule that the share of a given program in generation i + 1 is proportional to the program's tournament score in the previous generation i, starting from the result of the second round of the tournament (Axelrod, 1980b).
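The evolutionary rule in B) can be sketched as follows. This is not Axelrod's original tournament (his 62 entrant programs are not reproduced); it is a minimal illustration with three classic strategies, where the match length of 200 moves, the payoff values, and the initial equal shares are assumptions made here.

```python
# Payoffs for one prisoner's dilemma move (row player), with T > R > P > S
R, S, T, P = 3, 0, 5, 1

def play_match(strat_a, strat_b, rounds=200):
    """Iterated prisoner's dilemma between two strategies; returns (score_a, score_b)."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        if a == 'C' and b == 'C':
            score_a, score_b = score_a + R, score_b + R
        elif a == 'C':
            score_a, score_b = score_a + S, score_b + T
        elif b == 'C':
            score_a, score_b = score_a + T, score_b + S
        else:
            score_a, score_b = score_a + P, score_b + P
        hist_a.append(a); hist_b.append(b)
    return score_a, score_b

def tit_for_tat(own, opp):   return 'C' if not opp else opp[-1]
def all_defect(own, opp):    return 'D'
def all_cooperate(own, opp): return 'C'

strategies = {'TFT': tit_for_tat, 'ALLD': all_defect, 'ALLC': all_cooperate}
names = list(strategies)
# Pairwise scores need computing only once: these strategies are deterministic.
score = {(a, b): play_match(strategies[a], strategies[b])[0]
         for a in names for b in names}

def evolve(shares, generations=50):
    """Axelrod's rule: a strategy's share in generation i + 1 is proportional
    to (its current share) x (its average tournament score) in generation i."""
    for _ in range(generations):
        fitness = {a: sum(shares[b] * score[(a, b)] for b in names) for a in names}
        total = sum(shares[a] * fitness[a] for a in names)
        shares = {a: shares[a] * fitness[a] / total for a in names}
    return shares

shares = evolve({'TFT': 1 / 3, 'ALLD': 1 / 3, 'ALLC': 1 / 3})
```

Under this dynamic, unconditional defection first profits from exploiting unconditional cooperators, but as its prey dwindles it dies out and TIT FOR TAT comes to dominate, echoing Axelrod's result (cooperators can survive in TIT FOR TAT's shadow, since the two are indistinguishable once defectors are gone).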
The results were compiled in Axelrod (1984) with the appealing message that humans will cooperate, even when free to do otherwise, as long as the future parameter, defined as the probability of playing the next move (Takahashi, 2013c), is sufficiently high. It turned the Hobbesian view of the world, "the war of all against all," upside down.
Below is a narrative account of how simulation has been received and developed among researchers in Japan, based mainly on the author's experience. In 1991, Tetsuo Kondo and the author returned to the University of Tokyo as associate professors. Kondo, who had received his Ph.D. from the University of Chicago and published Kondo (1990), was the one who introduced us to Axelrod's research. Kondo had students write programs in his undergraduate courses and ran computer tournaments of Axelrod's type A). However, he died suddenly in 1994 at the age of 34. Takashi Shimizu (currently a professor at the University of Tokyo) took one of Kondo's classes and then entered the author's undergraduate seminar course in 1994 at Kondo's recommendation. Shimizu published Shimizu (1997), which developed the type B) simulation, during his master's program, and also wrote a paper in honor of Kondo, published in Takahashi (1996b).
During the short time when the author was a colleague of Kondo, the bubble in the Japanese economy burst after land prices peaked between 1991 and 1992 (Takahashi, 2017). In that period of economic stagnation, the author developed a perspective index in 1992 and started measurement. The finding was that the perspective index was surprisingly powerful in explaining the job satisfaction ratio, the turnover candidate ratio, and other variables. At that time, having heard about Kondo's research, it occurred to the author that the perspective index was a type of future parameter (Takahashi, 1996a, 1997b, 2013c). Since then, the perspective index survey has been conducted continuously, and its character as a future parameter has been confirmed (Takahashi, 1998a, 2014, 2018a, 2018b, 2019; Takahashi, Ohkawa, & Inamizu, 2009, 2014a; Takahashi, Ohkawa, Inamizu, & Akiike, 2013).

Simulation Videos Spark Your Imagination
Before his death, Kondo introduced the author to Shota Hattori, his senior from undergraduate and graduate school. Kozo Keikaku Engineering Inc. (KKE), led by Hattori, would become a major promoter of multi-agent simulation, also known as agent-based simulation, in Japan. Multi-agent simulation attracted attention in the complexity boom of the 1990s (e.g., Epstein & Axtell, 1996). In this boom, Axelrod (1997) and Axelrod and Cohen (1999), the latter co-authored by an author of Cohen et al. (1972), even put "complexity" in their titles.
In a multi-agent simulation, multiple agents behave in a space on the computer, each following its own rule. Even if each agent's rule is simple, the sum total of the behaviors of the individual agents results in complex movements that could not have been predicted.
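A standard textbook example of this property is Schelling's segregation model; it is not one of the models discussed in this paper, but it compactly shows how simple individual rules aggregate into unexpected global patterns. Agents of two types relocate whenever too few neighbors share their type, and mild individual preferences produce strong global segregation. The grid size, tolerance threshold, and random-relocation rule below are assumptions chosen for brevity.

```python
import random

random.seed(42)
SIZE = 20        # grid is SIZE x SIZE with wrap-around edges
THRESHOLD = 0.5  # an agent is content if >= 50% of its neighbours share its type

def neighbours(grid, x, y):
    """Types of the (up to 8) non-empty neighbouring cells."""
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            cell = grid[(x + dx) % SIZE][(y + dy) % SIZE]
            if cell is not None:
                out.append(cell)
    return out

def content(grid, x, y):
    nbs = neighbours(grid, x, y)
    return not nbs or sum(n == grid[x][y] for n in nbs) / len(nbs) >= THRESHOLD

def similarity(grid):
    """Average fraction of like-typed neighbours: a simple segregation index."""
    vals = []
    for x in range(SIZE):
        for y in range(SIZE):
            if grid[x][y] is not None:
                nbs = neighbours(grid, x, y)
                if nbs:
                    vals.append(sum(n == grid[x][y] for n in nbs) / len(nbs))
    return sum(vals) / len(vals)

# Populate roughly 40% type 'A', 40% type 'B', 20% empty (None)
grid = [[random.choice(['A', 'A', 'B', 'B', None]) for _ in range(SIZE)]
        for _ in range(SIZE)]

before = similarity(grid)
for _ in range(100):  # each sweep, every discontent agent jumps to a random empty cell
    empties = [(x, y) for x in range(SIZE) for y in range(SIZE) if grid[x][y] is None]
    for x in range(SIZE):
        for y in range(SIZE):
            if grid[x][y] is not None and not content(grid, x, y) and empties:
                ex, ey = empties.pop(random.randrange(len(empties)))
                grid[ex][ey], grid[x][y] = grid[x][y], None
                empties.append((x, y))
after = similarity(grid)
```

Even though no agent wants segregation, only a bare local majority of its own type, the segregation index rises sharply from its initial near-random level.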
General-purpose calculation software such as Mathematica, or a dedicated simulator such as Swarm of the Santa Fe Institute, is used for multi-agent simulation. KKE developed its own dedicated simulator. The prototype was called KK-MAS and was released in the spring of 2006 as artisoc (a compound term formed from artificial societies). In the process, a number of researchers became lead users of artisoc. Prof. Susumu Yamakage's group at the University of Tokyo was the center of the lead users, and the group's efforts produced a series of studies (e.g., Yamakage & Hattori, 2002) and a textbook (Yamakage, 2007). artisoc can also be used for various models in management; for example, Hideki Fujita (currently an associate professor at Toyo University), who was a classmate of Shimizu in the author's seminars at the University of Tokyo, conducts simulations of achievement motivation (Fujita, 2000, 2009). Being one of the lead users, the author attended the […] in July 1999 (Watts & Widdowson, 1999, 2000) and then designed a simulation model called the "communication competition model." Consequently, we wrote a Discussion Paper (Takahashi, Kuwashima, & Tamada, 2000), which was later published in Tamada (2004, 2005), and presented a paper "Communication competition model" (Kuwashima, Takahashi, & Tamada, 2000) at an annual meeting. About a week later, we gave a presentation with the same title at the KKE Customers Conference 2000 (Takahashi & Kuwashima, 2000). This time, we cut the talk short and spent half of the time showing the animation of the multi-agent simulation on the projector. Researchers in the audience then raised their hands one after another to ask questions, which was completely different from the annual meeting. We realized that even a simple animation in which "•" marks move around has the power to evoke people's imagination.
In 2001, one undergraduate student came to my seminar course after reading our Discussion Paper (Takahashi et al., 2000) and said he wanted to do simulation. His name was Nobuyuki Inamizu (currently an associate professor at the University of Tokyo). He frequently visited the Yamakage lab and became a co-author of Chapter 2 of Yamakage and Hattori (2002) during his undergraduate years.
One of the studies by Inamizu, who went on to graduate school, was to reproduce the garbage can model as a multi-agent simulation (Inamizu, 2006).

Self-enforcing Equilibrium Is a World of Death
In addition, the animation taught us something. When we switched the program to the "right" rational model, the screen stopped right away. It was not a bug: the model becomes self-enforcing and falls into equilibrium in a very short time. The still screen looked like a dead world. An equilibrium is a dead world, in contrast to the amoeba-like animation of the wandering model. Moreover, the rational model surprisingly had significantly lower long-term performance (Takahashi, Kuwashima, & Tamada, 2006). In theory, the wandering model should have clearly lower short-term performance than the rational model, because even if the amount of effective ideas at the current position is the highest, the wandering agent abandons the current position and moves to another one. Nevertheless, in the long term the rational model performed significantly worse than the wandering model. It may then be doubted that equilibrium is the desired state, as economists believe. The wandering model, which never reaches equilibrium, may be better.
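Why a rule that keeps moving can beat a rule that optimizes locally is easy to see in a toy setting. The following is not the communication competition model itself but a one-dimensional caricature written for this review: a "rational" hill-climber stops at the first local peak of a rugged landscape of effective ideas (the screen "stops"), while a "wandering" agent that must always move keeps discovering better positions. All parameters here are illustrative assumptions.

```python
import random

random.seed(1)

N = 200  # ring of positions, each holding a fixed amount of "effective ideas"
landscape = [random.random() for _ in range(N)]

def rational(start, steps=1000):
    """'Rational' rule: move to the better neighbour; stop as soon as neither
    neighbour beats the current position (a local peak -- equilibrium)."""
    pos = start
    for _ in range(steps):
        nxt = max([(pos - 1) % N, (pos + 1) % N], key=lambda p: landscape[p])
        if landscape[nxt] <= landscape[pos]:
            break                      # the screen 'stops'
        pos = nxt
    return landscape[pos]              # idea level at the resting point

def wandering(start, steps=1000):
    """'Wandering' rule: always move to a position other than the current
    one; keep track of the best idea level encountered along the way."""
    pos, best = start, landscape[start]
    for _ in range(steps):
        pos = random.choice([(pos - 1) % N, (pos + 1) % N])
        best = max(best, landscape[pos])
    return best

starts = [random.randrange(N) for _ in range(100)]
rational_avg = sum(rational(s) for s in starts) / len(starts)
wandering_avg = sum(wandering(s) for s in starts) / len(starts)
```

Averaged over many starting points, the wanderer ends up knowing far better positions than the hill-climber, which is trapped at whichever local peak it reached first; the caricature only illustrates the exploration logic, not the performance measure of the original model.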
The property of the wandering model of "moving to a movable position other than the current position" is consistent with the concept of "propensity to change." This concept was originally used in the Effective Temperature Hypothesis proposed by Takahashi (1989) to explain the lukewarm feeling in Japanese companies. The propensity to change is defined as the propensity neither to accept the present situation nor to spend an easygoing time, but to challenge the status quo (Takahashi, 2013b). The Effective Temperature Hypothesis holds that the lukewarm feeling can be explained by the difference between body temperature, the propensity to change of an organization member, and system temperature, the propensity to change of the system. The hypothesis is supported by data based on tens of thousands of people (Takahashi, 1993, 1997b, 2013b; Takahashi, Ohkawa, & Inamizu, 2009, 2014b). In short, there is a phenomenon in which daring to challenge, rather than settling into equilibrium, leads to higher long-term performance, and the propensity to change can be measured.

Are the Parameter Values Realistic?
In 1989, a conference on organizational learning was held at Carnegie Mellon University in honor of James March. Ten papers were published in the special issue of Organization Science on organizational learning (Vol. 2, No. 1, 1991), followed by four papers in a later issue (Vol. 3, No. 1, 1992). Cohen and Sproull (1996) was then published, adding nine papers to these 14. Among them, March (1991) contrasted exploration with exploitation, and became even more famous when Levinthal and March (1993) later named the phenomenon in which exploitation is prioritized over exploration the "myopia of learning." March (1991), which carries exploration and exploitation, now a major research trend (O'Reilly & Tushman, 2016), in its title, was actually a paper that developed and analyzed two simulation models.
However, both simulations had problems in their domain settings. For example, in the latter half, the competitive ecology model, the three lines in Figure 6 of March (1991) for N = 2, N = 10, and N = 100 clearly cross the vertical axis at about 0.2, about 0.8, and about 1.7, respectively. In practice, however, these lines can be obtained analytically without performing a simulation, and they are in fact tangent parabolas that touch the vertical axis at 0.44, 1.34, and 2.33, respectively (Takahashi, 1998b, Figure 6).
The first half, the mutual learning model, is also suspicious (Takahashi, 1998b). Figure 1 of March (1991) looks like the graph of a monotonically decreasing function, but in fact its curve starts from the socialization rate p1 = 0.1, and the part of the curve to the left is missing. If the missing part is supplemented, as shown in Figure 3 of Mitomi and Takahashi (2015), there is actually a peak in the missing part, where the socialization rate p1 is roughly 0.06 to 0.07. The claim that "slower socialization (lower p1) leads to greater knowledge at equilibrium than does faster socialization" (March, 1991, p. 75) is therefore clearly a mistake. The conclusion that "slow learning on the part of individuals maintains diversity longer, thereby providing the exploration that allows the knowledge found in the organizational code to improve" (March, 1991, p. 76) cannot be derived from this model. There is actually an optimal socialization rate that maximizes the average knowledge level (Mitomi & Takahashi, 2015, pp. 45-46).
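The mutual learning model is simple enough to re-run. The sketch below follows the description in March (1991): an m-dimensional reality, n individuals, and an organizational code; individuals adopt the code's views at socialization rate p1, and the code adopts the dominant view of the "superior group" at rate p2. The specific settings here (m = 30, n = 20 rather than March's 50, a fixed 100 periods, and a majority-margin voting rule) are simplifying assumptions made for this review, so exact values will differ from the published figures; the comparison of slow versus fast socialization is the point.

```python
import random

def run_once(p1, p2, rng, m=30, n=20, periods=100):
    """One run of a mutual learning model in the style of March (1991)."""
    reality = [rng.choice([-1, 1]) for _ in range(m)]
    code = [0] * m                                       # the code starts neutral
    people = [[rng.choice([-1, 0, 1]) for _ in range(m)] for _ in range(n)]

    def knowledge(beliefs):
        """Fraction of dimensions on which the beliefs match reality."""
        return sum(b == r for b, r in zip(beliefs, reality)) / m

    for _ in range(periods):
        # 1) Socialization: individuals adopt the code's view with prob p1
        #    on each dimension where the code holds a view they do not share.
        for person in people:
            for j in range(m):
                if code[j] != 0 and person[j] != code[j] and rng.random() < p1:
                    person[j] = code[j]
        # 2) Code learning: on each dimension the code adopts the dominant
        #    view of the superior group (individuals who know more than the
        #    code) with probability 1 - (1 - p2) ** k, k = majority margin.
        code_level = knowledge(code)
        superior = [p for p in people if knowledge(p) > code_level]
        for j in range(m):
            margin = sum(p[j] for p in superior)         # net vote on dimension j
            if margin != 0:
                dominant = 1 if margin > 0 else -1
                if code[j] != dominant and rng.random() < 1 - (1 - p2) ** abs(margin):
                    code[j] = dominant
    return knowledge(code)

def mean_knowledge(p1, p2=0.5, runs=10, seed=0):
    rng = random.Random(seed)
    return sum(run_once(p1, p2, rng) for _ in range(runs)) / runs

slow, fast = mean_knowledge(0.1), mean_knowledge(0.9)    # slow vs fast socialization
```

With these settings, slow socialization (p1 = 0.1) yields higher equilibrium code knowledge than fast socialization (p1 = 0.9), as in Figure 1 of March (1991), because diversity among individuals survives longer; Mitomi and Takahashi's (2015) point is that scanning p1 below 0.1 reveals a peak rather than a monotone curve.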
The biggest problem is whether a socialization rate of 0.06 to 0.07 in the simulation model is an unrealistically low value that never occurs, or a common value. If the parameter values set in a simulation model are not supported by survey data as being realistic, the predictions derived from it are merely delusions that will never occur. In other words, it is impossible to argue by simulation alone.

Although Computers Have Advanced
The history of simulation is closely related to improvements in computer performance. SD was a product of the mainframe era. The program of the garbage can model in Cohen et al. (1972) was written in FORTRAN, which was also the standard high-level programming language of mainframes. At that time, in order to conserve shared computer resources, a program written in a high-level language (source code, in the modern sense) was translated into machine language all at once with a compiler, transformed into a load module (binary code, in the modern sense), and reused. The first compiler was developed for FORTRAN in 1957, and this is also why DYNAMO, the translator for SD, was a compiler. However, as processing speed improved, an interpreter that executes source code by interpreting it sequentially was no longer impractical, and the presence of the compiler diminished.
Mainframes were available only at universities and large corporations. Moreover, to use one at a university, a budget for computer use first had to be obtained. The budget was limited; the author, for example, worked part-time as a computer instructor to obtain a separate budget even after becoming a research associate. Those who could run a large program were limited to a few privileged researchers.
In the 1980s, PCs began to spread, and researchers at universities were finally released from the budget constraints of the mainframe. BASIC, a simplified descendant of FORTRAN, was established as the high-level programming language for PCs, and it was implemented as an interpreter, not a compiler. Axelrod's (1984) computer tournament took place in this transition period, and the entrant programs were written in FORTRAN or BASIC (Axelrod, 1984, p. 43). Takahashi (1997a) calculated on a PC, so the Single Garbage Can Model was written in BASIC.
In the late 1990s, the performance of PCs improved and simulation became possible even with spreadsheets; Shimizu (1997) was such a case. In Japan, the improvement of computer performance prompted additional tests and retests of old models whose execution had once strained the limits of computing power. However, little such research was done outside Japan, and this was a major problem. In other words, the fact that models could once be run only on state-of-the-art computers was itself a source of uniqueness for researchers. When computer performance improves and anyone can simulate, that uniqueness is lost. Model and simulation experts therefore pursued another kind of uniqueness: creating complicated models that only peers can understand, and, with self-praise or by strictly imposing standards and methodologies (Davis, Eisenhardt, & Bingham, 2007; Harrison, Lin, Carroll, & Carley, 2007), trying to academically eliminate the interesting simulations of half-amateurs. This cycle has repeatedly led to declines in simulation research.

Conclusion
To put it in a rather extreme manner, if the formulas of a program have systemic meaning, the calculation is a simulation. With simulation, you can make rough predictions even for problems that are difficult to handle analytically in mathematics, or for problems you cannot intuitively imagine. However, simulation must not become an academic form in itself. Just as computing is no longer the monopoly of a few researchers, simulation must not be the property of a few researchers.