In recent years, there has been a global effort to promote Open Science (OS), with researchers increasingly expected to publish both research papers and research data. To encourage researchers to participate in OS, new research evaluation methods are needed that recognize OS activities as valuable contributions. OpenAIRE in Europe and CORE in the United Kingdom have begun offering dashboards for universities and research institutions. These dashboards aggregate a wide range of publicly available research outputs and support transparent, fair research evaluation based on OS monitoring indicators. In this study, we focused on OpenAIRE and CORE, as well as InCites and SciVal, which have long been used for research evaluation at universities and other institutions. First, we examined the indicators each service provides, based on publicly available materials, and classified them into eight categories: 'Provision,' 'Funding,' 'Outputs,' 'OS,' 'Collaboration,' 'Citations,' 'Impact,' and 'Other'. Next, we compared the number of indicators provided in each category. We found that multiple dashboards offered sets of indicators across various categories, indicating commonalities among the dashboards. Furthermore, OpenAIRE and CORE provide OS monitoring indicators aligned with the FAIR principles, such as persistent identifier assignment rates, highlighting a difference from InCites and SciVal. Additionally, OpenAIRE is preparing research evaluation indicators related to citations and collaborations, suggesting that the commonalities among these dashboards may increase in the future.
As an effective use of ChatGPT in research activities, the authors propose a method for preparing research data from a set of relatively short sentences. The titles of 239 articles published in four journals were used as a case study: prompts were given to ChatGPT to infer the content of each article from its title, and the responses obtained were organized into a data set. Because ChatGPT occasionally makes erroneous judgments, the authors reduced their effect by giving the same prompt ten times and aggregating the answers. When two data sets generated by this method were compared, 97% of the entries agreed, indicating high reproducibility. However, even a slight difference in the wording of a prompt could change the response obtained; further consideration of the optimal wording of prompts is needed to elicit correct responses from ChatGPT.
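The repeat-and-aggregate step described above can be sketched as follows. This is a minimal illustration, not the authors' actual implementation: the `ask_llm` callable, the prompt wording, and the example answers are all hypothetical stand-ins for a real ChatGPT API call.

```python
from collections import Counter

def majority_vote(responses):
    """Return the most frequent answer among repeated LLM responses.

    Aggregating several runs of the same prompt reduces the effect of
    occasional erroneous judgments by any single run.
    """
    answer, _count = Counter(responses).most_common(1)[0]
    return answer

def classify_title(ask_llm, title, n_runs=10):
    """Give the same prompt n_runs times and aggregate by majority vote.

    `ask_llm` is a caller-supplied function (e.g. a wrapper around the
    ChatGPT API); its interface here is a hypothetical placeholder.
    """
    prompt = f"Infer the research content from this article title: {title}"
    responses = [ask_llm(prompt) for _ in range(n_runs)]
    return majority_vote(responses)

# Illustration with a deterministic stand-in for the API:
# nine runs agree, one deviates, and the majority answer wins.
fake_answers = iter(["bibliometrics"] * 9 + ["open science"])
result = classify_title(lambda prompt: next(fake_answers), "Example title")
# result == "bibliometrics"
```

In practice the aggregation need not be a simple majority vote; the abstract only states that the ten answers were summed up, so other tallying schemes over the repeated responses would fit the same pattern.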
Based on my experience developing and implementing online courses at Syracuse University, the National Institute of Multimedia Education (NIME), and the Open University of Japan, I discuss trends and future directions in online education at Japanese universities under digital transformation (DX). After reviewing the history of online education at domestic and overseas higher education institutions and surveying the impact of introducing online education at Japanese universities during the coronavirus pandemic, I examine whether online education will take root in Japan's higher education institutions in the midst of DX.