With the explosive growth of products and services that incorporate AI technology, many small and medium-sized enterprises now use API services for other companies' AI technology to save time and cost and to cope with the shortage of engineers. Against the background of the shift in media driven by the widespread adoption of smartphones and the "work style reforms" now required of companies, this paper introduces example systems for efficiently searching files using AI technology.
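As a rough illustration of the kind of system discussed, the sketch below ranks files by semantic similarity between a query and each file's text using an external embedding API. The embed_via_api() function is a hypothetical placeholder for whichever vendor's API a company adopts; the paper does not name a specific service or search method.

```python
# Sketch of AI-assisted file search: rank files by semantic similarity
# between a query and each file's text, using a third-party embedding API.
import math

def embed_via_api(text):
    """Hypothetical call to an external embedding API; returns a vector of floats."""
    raise NotImplementedError("replace with the chosen vendor's API client")

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search_files(query, files):
    """files: {path: extracted text}. Returns paths sorted by similarity to the query."""
    q = embed_via_api(query)
    scored = [(cosine(q, embed_via_api(text)), path) for path, text in files.items()]
    return [path for _, path in sorted(scored, reverse=True)]
```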
Recently, the workload of white-collar workers has been increasing and has become a major social problem in Japan. Solving this problem requires improving the operational efficiency of each individual worker. Robotic Process Automation (RPA) is emerging as a tool for such improvement, but current RPA tools target back-office operations rather than the varied business operations of individual white-collar workers. In this report, a personal-assistance RPA (PA-RPA) that assists input operations in business applications is proposed. Our rapid prototyping demonstrates that the proposed PA-RPA reduces operating time by 57% when applied to the settlement of travel expenses through an in-house business application in our company.
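The report does not describe its implementation in detail, but the following sketch shows the kind of input assistance a PA-RPA could provide: replaying a travel-expense record into a business-application form via GUI automation. The field coordinates, the record layout, and the use of pyautogui are assumptions for illustration only.

```python
# Sketch of PA-RPA-style input assistance: type an expense record into form fields.
import time
import pyautogui

expense = {"date": "2019-04-01", "destination": "Osaka", "amount": "13200"}

# Screen coordinates of each form field (hypothetical values for an assumed layout).
FIELD_POSITIONS = {"date": (400, 300), "destination": (400, 340), "amount": (400, 380)}

def fill_expense_form(record):
    """Click each form field and type the corresponding value."""
    for field, value in record.items():
        x, y = FIELD_POSITIONS[field]
        pyautogui.click(x, y)                   # focus the input field
        pyautogui.write(value, interval=0.02)   # type the value keystroke by keystroke
        time.sleep(0.2)                         # give the application time to react

fill_expense_form(expense)
pyautogui.press("tab")  # move focus toward the submit button (assumed layout)
```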
Recently, various kinds of temporal data have been collected in many fields, and visual analytics interfaces are expected to be useful for utilizing such data. However, when temporal data are visualized with animation, the movement of the time-series data itself can conflict with the movement caused by user interaction. This paper focuses on trajectories, which can represent temporal and spatial changes uniformly, and proposes a visual analytics interface based on them. The paper also presents a use case in which a prototype interface is applied to time-series data.
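As a minimal illustration of the trajectory idea, assuming synthetic data and a simple color mapping (neither taken from the paper), temporal and spatial change can be rendered in a single static plot with time encoded as color, rather than animating points frame by frame:

```python
# Render a time series as a static trajectory: position in space, time as color.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 200)            # time axis
x = np.cos(t) * (1 + 0.1 * t)          # synthetic spatial time series
y = np.sin(t) * (1 + 0.1 * t)

fig, ax = plt.subplots()
points = ax.scatter(x, y, c=t, cmap="viridis", s=8)  # trajectory, time encoded as color
fig.colorbar(points, ax=ax, label="time")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_title("Time series rendered as a trajectory")
plt.show()
```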
Aiming to develop a dialogue system that converses after visually understanding the surrounding situation, we developed Deep Watcher, a Japanese caption generation system, together with captioned image datasets. We used the Show and Tell model, which combines a CNN and an LSTM, to generate captions, and manually evaluated the agreement rate between the caption contents and five feature items. As a result, the agreement rate of the generated caption contents was 41.6%; the best-performing feature item was gender, at 86.9%. Although the agreement rate of the caption contents was not high because of overfitting, the results for the gender feature item show the potential for application to the dialogue system.
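For reference, a Show-and-Tell style captioner pairs a CNN image encoder with an LSTM decoder, feeding the image feature as the first input step of the sequence. The layer sizes and the ResNet-18 backbone below are illustrative assumptions, not the configuration reported for Deep Watcher.

```python
# Minimal Show-and-Tell style captioner: CNN encoder + LSTM decoder (sketch).
import torch
import torch.nn as nn
import torchvision.models as models

class ShowAndTell(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        cnn = models.resnet18(weights=None)              # assumed image encoder backbone
        self.encoder = nn.Sequential(*list(cnn.children())[:-1])
        self.img_proj = nn.Linear(cnn.fc.in_features, embed_dim)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        # Encode the image, then feed it as the first "token" before the word embeddings.
        feats = self.encoder(images).flatten(1)          # (B, 512)
        feats = self.img_proj(feats).unsqueeze(1)        # (B, 1, embed_dim)
        words = self.embed(captions)                     # (B, T, embed_dim)
        inputs = torch.cat([feats, words], dim=1)        # image first, then caption words
        hidden, _ = self.lstm(inputs)
        return self.out(hidden)                          # (B, T+1, vocab_size) next-word logits

# Usage sketch: next-word logits for a batch of images and partial captions.
model = ShowAndTell(vocab_size=5000)
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 5000, (2, 12)))
```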