This paper describes a computer system for solving pre-university mathematical problems written in Japanese. At the core of its language processing mechanism is a formal grammar, with which a semantic representation is derived from a sentence. The development process of the grammar is explained in detail. Finally, several observations are provided regarding the current research field of artificial intelligence and language processing.
Deep learning models have already achieved significant results, surpassing sophisticated models based on expert-crafted features in a variety of tasks: image processing, sound processing, and natural language processing. They are considered to automatically learn and extract concepts such as “cat face” and “human body” from large datasets. However, what exactly are these so-called concepts, and how are they extracted? This manuscript provides a brief history of the neural networks that preceded deep learning, an explanation of the concepts learned by deep learning models, and a future perspective on deep learning research in relation to (cognitive) neuroscience.
Recent arguments in philosophy of science concerning artificial intelligence seem to concentrate heavily on social or ethical issues, such as the ‘Singularity’ problem or harmonious coexistence with AI. But the meaningful relationship between philosophy of science and AI is not limited to issues of this kind. The Bayesian network (BN) is one of the central topics in AI research, and it has much to do with traditional arguments in philosophy of science, since finding the single best definition of causality has been one of the main themes of that field. In this paper, I consider possible ways for philosophers to engage deeply with AI regarding ‘the methodology of BN’, ‘the definition of causation’, and ‘the elimination of causation’.
A new way of interpreting or approaching Wittgenstein’s remarks on following rules in Philosophical Investigations will be introduced. The notion of “family resemblance” will be claimed to play a central role in Wittgenstein’s views on what our concepts are, and therefore on what it is to employ them. By way of illustrating his views on concepts, I will appeal to certain models of concepts and classification from psychology and machine learning. Wittgenstein’s fundamental remarks on following rules will be presented as natural consequences of his views on the nature of our concepts.
More than half a century ago, Noam Chomsky advanced the nativist hypothesis that our syntactic competence is innate. His hypothesis has received a number of objections from philosophers and psychologists oriented toward empiricism, and the debate is still ongoing. This paper aims to elucidate the structure of this debate between nativism and empiricism. First, we delineate the meaning of “innateness” relevant to this debate. Then we articulate four arguments for nativism and the empiricists’ objections to each of them. The four arguments are the Poverty of Stimulus argument, the argument from linguistic universals, the argument from convergence, and the argument from critical period effects.