In this paper, we analyze the historical process by which proof assistants emerged and came to be accepted by the community of mathematicians. Our analysis reflects on how the notion of proof was deepened by Hilbert's proposal of formalism, the formalization of the notion of computability, and especially the development of type theory. In this analysis, we view mathematics as a human linguistic activity in which proofs are produced and communicated by means of both natural and formal languages. The complete acceptance of proof assistants by mathematicians is yet to be achieved, but we argue that it should and will happen through the proper design and implementation of a meta proof assistant that can talk and reason about any formal system.
A standard interpretation of Bertrand Russell’s early work on logic revolves around the doctrine of the unrestricted variable—the idea that the genuine variable of logic must range over all the objects in the universe. Those who endorse this interpretation view the doctrine as ‘the centerpiece’ of The Principles of Mathematics. My aim in this essay is to examine some of the given and possible grounds for this view. I attempt to show that Russell in that book endorses not the doctrine as it stands but the idea that there are no objects that cannot, in principle, be fully described—the idea that there is no logical bar to making simply true judgments about objects.
In “Appendix B” to his Principles of Mathematics (1903), Bertrand Russell developed a theory of types that is explicitly different from his so-called “ramified” theory of types in Principia Mathematica (1910). It is not easy to evaluate this “Appendix B” theory of types properly, because (A) it is sometimes thought to be only a rough sketch added hastily, and (B) it seems to play no role in Russell's later theoretical developments. In this paper, however, I shall show that neither (A) nor (B) is correct, and that the “Appendix B” theory of types played an important role in the theoretical developments leading to the ramified theory of types.
At the end of the 19th century, the Peano School elaborated its famous theory of “definitions by abstraction”. Two decades later, Hermann Weyl elaborated a generalization of the former, termed “creative definitions”, capable of covering various cases of ideal elements (Peano's abstracta among them). While the Peano School's proposal eventually turned out to rest on the now-standard classificatory process of quotienting a set by an equivalence relation, Weyl's proposal still lacks a set-theoretical, classificatory interpretation. In this paper, we define and investigate the notion of relational indiscernibility (upon which Weyl's creative definitions are based) and show that a bridge from the concept of indiscernibility to the notion of type (sets closed under bi-orthogonality) may be built from the observation that individuals are indiscernible exactly when they belong to exactly the same types. In the last part, we investigate some philosophical consequences of these observations for the theory of abstraction.
We discuss equational representations of the elimination rule of inductive types, with a focus on the type of natural numbers, in the context of a series of approaches to separating an equational calculus from logic. We go back to a source of the purely equational representation of the elimination rule: Wittgenstein's uniqueness rule. We analyze Wittgenstein's argument in comparison with others', which provides supplementary remarks to Marion-Okada (2018).
Microscopes have shown us invisible entities since their invention. Magnified images from the optical microscope convinced us of the existence of entities such as blood capillaries and cell nuclei during cell division. The electron microscope, in turn, visualized viruses whose existence people had doubted. This paper explores the history of observations of dislocations in crystals with microscopes from the 1940s to the 1960s, showing how microscopists visualized dislocations in order to verify their existence. The visualization of dislocations with the transmission electron microscope in 1956 played a critical role in the acceptance of the reality of dislocations. This historical case also offers an opportunity to analyze the relationship between representations and existence.
This paper aims to put forward an alternative to the standard theory of action (STA), which I call “the teleological theory of action” (TTA). I also examine the main argument for STA and maintain that it is possible to deny two of its premises. These denials are called disjunctivism about bodily movement and disjunctivism about intention, respectively. TTA implies that an intention in action is (part of) a bodily movement, and this in turn implies the two disjunctivisms. TTA is supported by causal dispositionalism, which takes dispositions as basic and understands causation in terms of them.
Artificial intelligence research has made impressive progress in the last ten years with the development of new methodologies such as deep learning. This progress has several implications for both the philosophy of cognitive science and the philosophy of artificial intelligence, but none are conclusive. Though its success seems to support connectionism in cognitive science, several features of human cognition remain to be explained. Also, though it is often said that deep learning is the key to building artificial general intelligence, the deep neural networks we now have are specialized ones, and it is not clear how a general artificial intelligence could be built from such specialized networks.
Haecceitism is the idea that each particular object has a haecceity: the property that determines its uniqueness as an object. Thus, for example, we could say that Socrates's haecceity is the property of being (identical with) Socrates. However, haecceitism seems to face the “Haecceitic Euthyphro Problem”: especially in the case of the fission of an amoeba, it is unclear how to set an explanatory order between two facts, namely the destruction or generation of particular objects and the instantiation of their haecceities. In this paper, I distinguish between two versions of haecceitism and address this issue with the version I call “primitivist haecceitism.”
The issue of (in)compatibility between presentism and time travel has intrigued many philosophers for the last few decades. Keller and Nelson have argued that, if presentism is a feasible theory of time that applies to ordinary (non-time-travel) cases, then it should be compatible with time travel. Bigelow and Sider, on the other hand, have independently argued that the idea of time travel contradicts the presentist conception of time because it involves the ‘spatialisation of time’ (in a metaphysical sense), which is something that presentists should resist. In support of the latter claim, I offer a new argument via a different route. More specifically, I clarify the basic components of the view that I take to be ‘orthodox’ presentism by examining how presentists have conceived of temporal notions of the existence of things and their possession of properties. It is because of these notions that presentists can sensibly maintain a dynamic theory of time and should not believe in time travel.
This article addresses a series of papers by Suzuki (2016a; 2016b; 2018), in which he put forward anti-psychologism, a new standpoint in action theory. This standpoint, however, seems to have lost the path Anscombe originally opened up. Anscombe's point (called “Anscombe's motif” in this article) was that the agent him/herself responds to the question “Why did/do you do…?” by revealing his/her original motive. Departing from this ground, modern theories let an unfamiliar idea, such as that of a normative reason, flow into the debate on human action. We review this academic environment critically in terms of Kaneko (2017), whom Suzuki criticized in one of his papers (2018).
The world of perception has a structural feature that we name “the foreground influence”. This influence clarifies the relation between brain damage and patients' conditions. Furthermore, the foreground influence explains the relation between the condition of the undamaged brain and how the world of perception looks. The foreground influence, which is not a causal relation, makes the landscape appear directly, without representation. Information processing in the brain operates as a foreground influence, like many kinds of glasses. We will be able to achieve the naturalization of mind only once we can clarify the physical nature of information and the virtual dimension that information processing produces.
In this review essay, I examine the significance of Conceptual Engineering Manifesto, edited by K. Todayama and K. Karasawa, which is the first book in Japan to explore the new field of “conceptual engineering” in collaboration with psychology. After presenting an overview of its aim, distinctive features, and contents, I make general comments on the possibility of the research program proposed in the book, mainly in the following respects: its feasibility as engineering, and the continuity, function, and nature of concepts. Through this discussion, I also articulate further tasks for conceptual engineering to address.
‘What is reality?’ is a Japanese book written by a mathematician and a phenomenologist. The authors examine the general structure of nature as well as our behavior. In the present note, we review its content from the perspective of identity, which, according to the authors, is an isomorphic natural transformation.
This is a review essay on Daisuke Kachi's Agents: Contemporary Substance Ontology (sic; Shunjusha, 2018). The book develops and partially defends an ontology that takes the category of substance as the most fundamental one. In it, the author provides a new perspective on substance, which consists in characterizing substances as bearers of what he calls “substance modalities” (of which there are four kinds, stemming from the factors of essence, power, past persistence, and future persistence, respectively). The first part of this review essay gives an extended overview of Kachi's book, while the second discusses some problems it may face.