Social media allows people to post and evaluate diverse information, including ideas, news, and opinions, on a wide scale. Once such an online item is posted on a social media site, it can be appreciated and shared by many people and become popular. This kind of phenomenon can have a large influence on people's daily lives and social trends. Thus, studies on modeling the arrival process of shares of an individual item have recently attracted a great deal of interest in the field of social media mining. In this paper, we propose a probabilistic model, called the cooperative Hawkes process (CHP) model, that combines a Dirichlet process with a Hawkes process in a novel way to discover the cooperative structure among all the items involved. The proposed model takes into account the arrival processes of shares for all of these items. We develop an efficient method of inferring the CHP model from observed sequences of share events, and present an effective framework for predicting the future popularity of each of these items. Using synthetic and real data, we demonstrate that the CHP model outperforms both the Hawkes process model without interaction among items (HP model) and the multivariate Hawkes process model (MHP model) in terms of popularity prediction. Moreover, for real data from a cooking-recipe sharing site, we apply the CHP model to discover the cooperative structure among cooking recipes from the viewpoint of popularity dynamics.
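As background for the self-exciting dynamics underlying the CHP model, the following is a minimal sketch of the conditional intensity of a univariate Hawkes process with an exponential kernel; the parameter values (mu, alpha, beta) are illustrative and not taken from the paper.

```python
import math

def hawkes_intensity(t, events, mu=0.5, alpha=0.8, beta=1.5):
    """Conditional intensity of a univariate Hawkes process with an
    exponential kernel: lambda(t) = mu + sum_{t_i < t} alpha*exp(-beta*(t - t_i)).
    Each past share-event t_i temporarily raises the rate of future shares."""
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events if ti < t)

share_times = [1.0, 2.0, 2.5]              # illustrative share-event timestamps
base = hawkes_intensity(0.5, share_times)  # before any event: baseline rate mu
excited = hawkes_intensity(2.6, share_times)  # just after a burst of shares
print(base, excited)
```

The self-excitation (`excited > base`) is what lets Hawkes-type models capture the bursty arrival of shares; the CHP model additionally shares excitation structure across items via the Dirichlet process.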
In this paper, we analyzed individuals' perceptions of the roles of intelligent machines and systems, based on the results of a questionnaire survey conducted in 2015 and 2016 on artificial intelligence, robots, and other "intelligent machines and systems." In conclusion: 1) human labor was not strongly supported for so-called public goods and services, such as disaster prevention and military jobs; respondents preferred leaving these tasks to machines. 2) In support of existing research, child-rearing was seen as women's responsibility, as in the present situation, on the grounds that women and children benefit from the parenting experience. In addition, university graduates, who have greater flexibility in social options and creative capacity, were perceived to benefit from human-powered jobs, especially in tasks that require human responsibility or where the cost of implementing intelligent machines and systems would be high. Some tasks, such as music, were also viewed as preferentially human tasks. 3) After taking endogeneity into consideration in the regressions, such as for child-rearing and nursing care, individuals overall perceived that childcare should not be left to machines, although nursing care was considered to possibly benefit from the use of machines.
Recent progress in EdTech has raised attention to research on Knowledge Tracing (KT), which tries to estimate the knowledge level of students from learning log data. In this context, the method called Deep Knowledge Tracing (DKT), which leverages deep neural networks to estimate students' knowledge levels, shows remarkable performance; however, most existing KT methods, including DKT, require skill tags that indicate which skill is needed to solve a question, and this harms the applicability of KT to real-world data, which is often not organized with skill tags. In this paper, we extend the DKT model to generate pseudo-skill tags inside the model by constraining the weight matrix of the first layer so that it can be regarded as a translation matrix from questions to skills. We empirically validate the efficacy of the proposed extension using two public datasets, ASSISTments 2009-2010 and Bridge to Algebra 2006-2007. The results show that our extension gives performance similar to or slightly higher than the original DKT model without requiring any predefined skill tags. We also analyze the properties of the generated pseudo-skill tags using statistics and network analysis, and find that they have a more hierarchical and information-efficient structure than the predefined skill tags.
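The idea of reading a first-layer weight matrix as a question-to-skill translation can be sketched as follows; the weights here are random stand-ins for learned parameters, and the normalization scheme is an assumption for illustration rather than the paper's exact constraint.

```python
import numpy as np

rng = np.random.default_rng(0)
n_questions, n_skills = 6, 3

# Stand-in for the learned first-layer weights; in the proposed extension
# these are trained under a constraint so that each row can be read as a
# mapping from one question to the latent skills.
W = rng.random((n_questions, n_skills))

# Normalize each row into a distribution over pseudo-skills.
P = W / W.sum(axis=1, keepdims=True)

# A pseudo-skill tag for each question is its most probable latent skill.
pseudo_tags = P.argmax(axis=1)
print(pseudo_tags)  # one integer tag per question
```

Under this reading, questions sharing the same argmax column receive the same pseudo-skill tag, which is what enables the statistics and network analysis of tag structure described above.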
Error detection in ocean data is difficult because the characteristics of the data differ among ocean areas. At present, accurate error detection depends on visual checks by ocean data technicians. However, human resources are limited and their skills are not uniform, which makes it difficult to deliver accurately and uniformly quality-controlled ocean data. In this work, we propose a framework for automated error detection in ocean data that is applicable to unknown types of errors and takes spatial autocorrelation into consideration. Our proposed framework consists of a training-data selection phase, which accounts for spatial autocorrelation, and an error detection phase. Through empirical experiments, we found effective combinations of features, training-data selection methods, and anomaly detection methods with respect to ocean characteristics. In addition, our proposed training-data selection method worked efficiently even when there were few training data around the test data.
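One simple way to exploit spatial autocorrelation in a training-data selection phase is to train only on observations near the test location, since nearby ocean data tend to share characteristics. The distance criterion and data layout below are hypothetical illustrations, not the paper's actual selection method.

```python
import math

def select_training_data(test_point, candidates, radius):
    """Hypothetical training-data selection: keep only candidate observations
    within `radius` (in degrees, for simplicity) of the test location, so that
    spatially autocorrelated, i.e. nearby and hence similar, data are used."""
    lat0, lon0 = test_point
    return [(lat, lon, v) for lat, lon, v in candidates
            if math.hypot(lat - lat0, lon - lon0) <= radius]

# Illustrative (lat, lon, salinity) observations.
candidates = [(35.0, 140.0, 14.2), (35.1, 140.2, 14.5), (10.0, 150.0, 28.9)]
nearby = select_training_data((35.05, 140.1), candidates, radius=1.0)
print(len(nearby))  # the distant tropical observation is excluded
```

An anomaly detector fitted only to `nearby` would then judge the test observation against locally plausible values rather than a global distribution.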
In the fields of sustainability science and environmental studies, researchers and stakeholders in various fields deal with common problem areas. It is necessary to establish a method that facilitates designing a framework for problem solving toward a sustainable society by explicating and sharing the mutual relationships between pieces of knowledge. This paper focuses on supporting the construction of causal logics between pieces of knowledge in sustainability science and environmental studies, and develops a tool fulfilling these requirements and specifications. We first discuss a method for assessing logical consistency between a goal and an issue or a solution. Second, we design the specification of a supporting tool for exploring and extracting causal logics and pieces of knowledge based on ontology engineering. Third, we attempt to represent relationships between instances in sustainability science and environmental studies through class concepts structured by an ontology, by means of the supporting tool. Fourth, we examine how differences in the concepts emphasized by each academic domain change the way problem areas are recognized and understood, through visualization with the supporting tool. Fifth, we assess the relationships between goals and issues or solutions by actually generating and representing causal logics. Finally, through this experimental process, we discuss the specifications needed to apply this tool to these fields. The developed tool is published at http://www.hozo.jp and the ontology used can be found at http://env-ss.hozo.jp/.
To introduce renewable energy in regional communities, it is necessary to select a sustainable energy mix on the basis of evaluation from multiple viewpoints, including complex environmental impacts. The purpose of this study is to develop a tool for multi-objective optimization and evaluation of renewable energy composition in municipalities, considering multiple environmental criteria. This tool was developed by improving the Renewable Energy Regional Optimization Utility Tool for Environmental Sustainability (REROUTES). The adjustable variables are the amounts of deployed renewable energy resources from solar, wind, small- and medium-scale hydro, geothermal, and biomass energy. NSGA-II, a kind of genetic algorithm, was applied and implemented in REROUTES to solve a multi-objective optimization with six objective functions (proportion of developed renewable energy, economic balance, decrease in CO2 emissions, circulation rate of biomass resources, impacted ecosystem area, and diversity index). A case study of two municipalities showed that the developed tool successfully calculated Pareto solutions exhibiting trade-offs that reflect the natural conditions and varying demand structures of the case-study areas. In addition, a process of selecting one best solution from the Pareto solutions on the basis of local opinions could be demonstrated. In conclusion, this study developed a useful tool to support decision-making regarding the development of renewable energy resources.
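The core of NSGA-II's non-dominated sorting is the Pareto dominance relation over objective vectors, which determines which energy-mix candidates survive selection. The following is a minimal sketch of dominance and front extraction with toy two-objective values (the study itself uses six objectives); it is not the tool's implementation.

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (maximization):
    a is at least as good in every objective and strictly better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors (the Pareto front)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Toy candidates: (renewable share, economic balance), both to be maximized.
solutions = [(0.9, 1.0), (0.5, 3.0), (0.4, 2.0), (0.8, 2.5)]
front = pareto_front(solutions)
print(front)  # (0.4, 2.0) is dominated by (0.8, 2.5); the rest trade off
```

NSGA-II repeatedly peels off such fronts to rank a population, then uses crowding distance within each front; selecting "one best solution on the basis of local opinions" corresponds to choosing a single point from the final front.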
The globally distributed ocean monitoring system Argo, with more than 3,700 autonomous floats, has been in operation, and its accumulated big ocean-observation data supports many studies, such as investigations of climate-change mechanisms. Since the observed data sometimes involve errors, human experts must visually confirm and revise quality control (QC) flags. However, such manual QC by human experts cannot be performed in some countries. In addition, it is difficult to standardize the quality of ocean-observation data worldwide because manual QC depends on the heuristics of individual experts. Therefore, this paper proposes a method for error detection in Argo observation data using Conditional Random Fields (CRFs), because the problem requires considering the sequences of both features and quality flags to label each depth accurately. This paper also proposes a feature-function design method based on decision tree learning, which allows coping with various types of observation errors without manual work, whereas previous work had to focus on certain error types because feature-function design required manual labor. Furthermore, the proposed method uses two separate CRF-based sequential classifiers, one with manually designed feature functions and one with automatically designed feature functions, rather than combining both sets of feature functions into a single classifier. Experimental results show that the proposed method could detect all types of salinity errors with higher accuracy of QC-flag assignment than the system actually operated in the Argo project. In particular, the recall of the proposed method was better than that of a CRF using the manually designed feature functions, even for the specific error types for which those functions were designed.
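The idea of deriving CRF feature functions from decision-tree splits can be sketched as follows: each learned split (here a hypothetical threshold on the salinity jump between adjacent depths) becomes a binary indicator feature that fires jointly with a candidate label. The threshold, labels, and data are illustrative assumptions, not values from the paper.

```python
def tree_rule_feature(threshold, index):
    """Hypothetical CRF feature function derived from a decision-tree split:
    fires (returns 1) when the observation at position t exceeds the learned
    threshold AND the candidate label at t is "error"."""
    def feature(obs_seq, t, label):
        return 1 if obs_seq[t][index] > threshold and label == "error" else 0
    return feature

# Suppose decision-tree learning on QC'd profiles produced the split
# "salinity jump between adjacent depths > 0.5" (threshold is illustrative).
f = tree_rule_feature(0.5, index=0)
profile = [(0.1,), (0.7,), (0.2,)]  # per-depth salinity jumps, surface downward
flags = ["error" if f(profile, t, "error") else "good"
         for t in range(len(profile))]
print(flags)  # ['good', 'error', 'good']
```

In an actual CRF, many such features would be weighted and combined with label-transition features, so that the flag at each depth also depends on the flags of neighboring depths rather than being decided pointwise as in this sketch.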