In recent years, we have witnessed an unprecedented rise in localized severe weather phenomena, such as tornadoes and heavy rain, that conventional weather forecasting systems cannot predict. Despite this, observation posts for forecasting tornadoes and heavy rain remain scarce, and the number of observation points must be increased drastically to make accurate predictions from real data. We have developed a compact, low-cost pressure information acquisition system to detect signs of localized abnormal weather. This research proposes an algorithm that predicts local weather by detecting anomalous values in the pressure sensor's time series and notifies users of impending dangerous weather conditions.
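The abstract does not specify the detection algorithm itself; as a minimal sketch of the general idea, a rolling z-score detector over the pressure time series (the `window` and `threshold` values here are illustrative assumptions, not the paper's) might look like:

```python
import statistics

def detect_pressure_anomalies(readings, window=60, threshold=3.0):
    """Flag indices whose pressure deviates strongly from a rolling baseline.

    readings:  list of pressure samples (hPa), oldest first
    window:    number of preceding samples used as the baseline
    threshold: z-score above which a sample is flagged as anomalous
    """
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean = statistics.fmean(baseline)
        stdev = statistics.pstdev(baseline)
        if stdev == 0:
            continue  # flat baseline: no meaningful z-score
        if abs(readings[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies
```

A sudden pressure drop far outside the recent baseline would be flagged and could trigger a user notification.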
The performance of different types of weighted citation networks for detecting emerging research fronts was investigated in a comparative study in existing work. Citation networks are constructed and then divided into clusters to detect research fronts. In addition, measures for weighting citations, such as the difference in publication years between citing and cited papers and the similarity of their keywords, which are expected to detect emerging research fronts effectively, were applied. However, the functions that determine edge weights in these citation networks were chosen empirically. A learning method is needed to determine effective weighting functions automatically from the characteristics of the dataset. In this paper, we propose a novel neural-network-based learning method for determining the edge weights of citation networks. We evaluate the proposed method in three research domains: Gallium nitride, Complex Networks, and Nano-carbon. We demonstrate that the proposed method outperforms the existing methods on the following measures of extracted research fronts: visibility, speed, and topological and field relevance.
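As context for what the neural network replaces, a hand-crafted weighting function of the kind described, combining decay in publication-year difference with keyword similarity, can be sketched as follows (the exponential form and the `alpha` decay rate are illustrative assumptions; the paper learns such functions instead of fixing them):

```python
import math

def citation_edge_weight(citing_year, cited_year, citing_kw, cited_kw, alpha=0.2):
    """Illustrative fixed edge weight: recent citations between topically
    similar papers get larger weights (Jaccard similarity of keyword sets
    scaled by exponential decay in the publication-year gap)."""
    year_decay = math.exp(-alpha * abs(citing_year - cited_year))
    inter = len(set(citing_kw) & set(cited_kw))
    union = len(set(citing_kw) | set(cited_kw)) or 1
    return year_decay * (inter / union)
```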
When retrieving Linked Open Data using SPARQL, it is important to consider the execution cost of a query, especially when the query uses the inference capability of the endpoint. A query often causes unpredictable and unwanted consumption of an endpoint's computing resources, since it is sometimes difficult to understand and predict what computations will occur on the endpoint. To prevent the execution of such time-consuming queries, approximating the original query can be a good option for reducing the load on endpoints. In this paper, we present an idea and a conceptual model for building endpoints with a mechanism that automatically reduces unwanted inference computation by predicting its computational cost and transforming such a query into a faster one through a GA-based query rewriting approach. Our analysis shows the potential benefit of preventing unexpectedly long inference computations and keeping the variance of inference-enabled query execution times low by applying our query rewriting approach. We also present a prototype system that classifies, on the endpoint side, whether a query execution is time-consuming by using machine learning techniques, and that rewrites such time-consuming queries by applying our approach.
Over the past few years, convolutional neural networks (CNNs) have set the state of the art in a wide variety of supervised computer vision problems. Most research effort has focused on single-label classification, due to the availability of the large scale ImageNet dataset. Via pre-training on this dataset, CNNs have also shown the ability to outperform traditional methods for multi-label classification. Such methods, however, typically require evaluating many expensive forward passes to produce a multi-label distribution. Furthermore, due to the lack of a large scale multi-label dataset, little effort has been invested into training CNNs from scratch with multi-label data. In this paper, we address both issues by introducing a multi-label cost function adequate for deep CNNs, and a prediction method requiring only a single forward pass to produce multi-label predictions. We show the performance of our method on a newly introduced large scale multi-label dataset of animation images. Here, our method reaches 75.1% precision and 66.5% accuracy, making it suitable for automated annotation in practice. Additionally, we apply our method to the Pascal VOC 2007 dataset of natural images, and show that our prediction method outperforms a comparable model at a fraction of the computational cost.
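The abstract does not state the cost function itself; a common formulation that fits the description, shown here purely as an assumed sketch, is per-label sigmoid cross-entropy, which treats each label as an independent binary classifier so a single forward pass yields the whole multi-label prediction:

```python
import numpy as np

def multilabel_bce(logits, targets):
    """Per-label sigmoid cross-entropy over a vector of class logits.
    Each label contributes an independent binary term, so one forward
    pass scores every label at once."""
    probs = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-12  # guard against log(0)
    return -np.mean(targets * np.log(probs + eps)
                    + (1 - targets) * np.log(1 - probs + eps))

def predict_labels(logits, threshold=0.5):
    """Threshold the per-label sigmoid outputs into a binary label set."""
    probs = 1.0 / (1.0 + np.exp(-logits))
    return (probs >= threshold).astype(int)
```

The 0.5 threshold is an illustrative default; in practice it can be tuned per label on validation data.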
A multi-sensor ambient sensing system is proposed for estimating a user's comfort or discomfort with the lighting conditions during desk work. Comfort/discomfort is estimated from facial expression, body sway, writing motion, and frequency of drinking, measured by sensors embedded in the environment. The recognition rate is evaluated in an experimental environment under a lighting condition that induces different feelings of comfort depending on the user's state on a given day. As a result, the recognition rate of comfort/discomfort on a two-point scale reaches 91% when a suitable combination of ambient sensors is selected. Furthermore, the results suggest that not only facial expression but also body sway, writing motion, and frequency of drinking are useful for estimating comfort/discomfort.
Understanding individual students deeply is vital in educational settings. Comment data written by students after each lesson help in understanding their learning attitudes and situations, and can be a powerful source of data for all forms of assessment. The PCN method categorizes the comments into three items: P (Previous learning activity), C (Current learning activity), and N (Next learning activity plan). The objective of this paper is to investigate how the three time-series items (P, C, and N) and the difficulty of a subject affect the prediction of final student grades using two machine learning techniques: Support Vector Machine (SVM) and Artificial Neural Network (ANN). The experimental results indicate that students described their current activities (C-comments) in more detail than their previous and next activities (P- and N-comments); this tendency is reflected in the prediction accuracy and F-measure for their grades.
Currently, many attractive e-services for retrieving useful information from the Internet are available. Because their usage differs from service to service, a support system for users with low information technology literacy is highly desirable. The key requirement for such a support system is extensibility, because the system must keep up with new attractive e-services according to users' preferences at minimal revision cost. This paper therefore proposes “An Application Framework for Trend Surfing System: TSS Framework”, which focuses on an extensible application framework for trend surfing. Trend surfing means that related trendy information (multi-aspect) retrieved from different e-services can be displayed on different screen devices (multi-screen) through the user's intuitive operations (multimodal user interface). The proposed TSS Framework, derived from the Model-View-Controller (MVC) pattern, is based on a system architecture that enables a double mashup of both e-services and the screen devices in front of users. It is implemented with a two-phase information retrieval model that enables multi-aspect trend surfing. We illustrate the TSS Framework through an implementation example, and demonstrate its effectiveness with a system modification case study.
Stochastic processes play an important role in gene regulatory networks. For many years, methods and algorithms have been developed to solve problems involving stochastic mechanisms in cellular reaction systems. The Discrete Chemical Master Equation (dCME) is a method for analyzing biological networks by computing the exact probability distribution over microstates. Because all computation and analysis of the probability distribution proceed from the enumerated microstates, enumerating the network microstates is a significant prerequisite step. However, no efficient enumeration method exists, and applications perform poorly when the enumeration must address a complex or large network. To improve these microstate computation and analysis methods, we propose an efficient enumeration algorithm based on Matrix Network, a new data structure we designed. Unlike traditional methods that enumerate by simulating the application of reactions, the proposed approach exploits the correlation of microstate values and the geometric structure of the microstate map to accelerate the enumeration. In this paper, the theoretical basis, features, and algorithms of Matrix Network are discussed, and sample applications demonstrating computation and analysis with Matrix Network are provided.
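For contrast with the Matrix Network approach, the brute-force baseline it aims to replace can be sketched directly: enumerate every combination of molecular copy numbers up to assumed per-species bounds. This is the approach that scales poorly as the network grows.

```python
import itertools

def enumerate_microstates(max_counts):
    """Naive microstate enumeration: every combination of per-species
    copy numbers from 0 up to the given bound.  The state count is the
    product of (bound + 1) over all species, which grows combinatorially.
    """
    ranges = [range(m + 1) for m in max_counts]
    return list(itertools.product(*ranges))
```

With bounds `[1, 2]` (two species), this yields 2 × 3 = 6 microstates; realistic networks with many species and high copy numbers make this table intractably large, which motivates a smarter enumeration structure.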
CAPTCHAs distinguish humans from automated programs by presenting questions that are easy for humans but difficult for computers, e.g., recognition of visual characters or audio utterances. State-of-the-art research suggests that the security of visual and audio CAPTCHAs lies mainly in anti-segmentation techniques, because individual symbol recognition after segmentation can be solved with a high success rate by certain machine learning algorithms. Thus, most recent commercial CAPTCHAs present continuous symbols to prevent automated segmentation. We propose a novel framework that can automatically decode continuous CAPTCHAs, and assess its effectiveness with actual CAPTCHA questions from Google's reCAPTCHA. Our framework is built on a sequence recognition method based on hidden Markov models (HMMs), which can be concisely implemented using an off-the-shelf HMM toolkit library. This method concatenates several HMMs, each of which recognizes a symbol, into a larger HMM that recognizes a whole question. Our experimental results reveal vulnerabilities in continuous CAPTCHAs: the solver cracks the visual and audio reCAPTCHA systems with 31.75% and 58.75% accuracy, respectively. We further propose guidelines to prevent possible attacks by HMM-based CAPTCHA solvers, on the basis of synthetic experiments with simulated continuous CAPTCHAs.
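The actual solver is built with an off-the-shelf HMM toolkit; purely as an illustration of the concatenation idea, chaining per-symbol transition matrices into one larger left-to-right model can be sketched as below. The `exit_prob` linking probability is an assumption of this sketch, not a value from the paper.

```python
import numpy as np

def concatenate_hmms(transitions, exit_prob=0.3):
    """Chain per-symbol left-to-right HMM transition matrices into one
    larger HMM whose state sequence walks through the symbols in order.

    transitions: list of (n_i x n_i) row-stochastic transition matrices
    exit_prob:   probability mass moved from each sub-HMM's final state
                 to the first state of the next sub-HMM
    """
    sizes = [t.shape[0] for t in transitions]
    big = np.zeros((sum(sizes), sum(sizes)))
    offset = 0
    for k, t in enumerate(transitions):
        n = sizes[k]
        big[offset:offset + n, offset:offset + n] = t  # copy sub-HMM block
        if k < len(transitions) - 1:
            # rescale the final state's row, then link it to the next symbol
            big[offset + n - 1] *= (1.0 - exit_prob)
            big[offset + n - 1, offset + n] = exit_prob
        offset += n
    return big
```

Decoding the composite model with the Viterbi algorithm then recovers the symbol sequence in one pass, which is why anti-segmentation alone does not stop this class of solver.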
A refinement type can be used to express a detailed specification of a higher-order functional program. Given a refinement type as a specification of a program, we can verify that the program satisfies the specification by checking that the program has the refinement type. Refinement type checking/inference has been extensively studied and a number of refinement type checkers have been implemented. Most existing refinement type checkers, however, need type annotations, which place a heavy burden on users. To overcome this problem, we reduce a refinement type checking problem to an assertion checking problem, which asks whether the assertions in a program never fail, and then use an existing assertion checker to solve it. This reduction enables us to construct a fully automated refinement type checker from a state-of-the-art fully automated assertion checker. We also prove the soundness and completeness of the reduction, and report on an implementation and preliminary experiments.
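The paper targets higher-order functional programs, but the reduction idea can be illustrated in Python as an assumed analogue: the refinement predicate becomes a runtime assertion, and refinement type checking becomes the question of whether that assertion can ever fail on any execution.

```python
# Refinement spec (informal):  div : int -> {d:int | d != 0} -> int
# The reduction encodes the refinement predicate as an assertion and
# hands the resulting program to an assertion checker.

def div(x, d):
    assert d != 0          # encodes the argument refinement {d:int | d != 0}
    return x // d

def client(n):
    # n*n + 1 is nonzero for every integer n, so an assertion checker
    # can prove the assertion inside div never fails on this call path,
    # i.e. client respects div's refinement type.
    return div(n, n * n + 1)
```

A call such as `div(1, 0)` would violate the refinement, and the assertion checker would report exactly that failing assertion.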
In this paper, we demonstrate a food recognition method based on monitoring power leakage from a domestic microwave oven. A Universal Software Radio Peripheral (USRP) is used as a low-cost spectrum analyzer to measure the microwave oven's leakage as a received signal strength indication (RSSI). We aim to recognize 18 categories of food commonly cooked in a microwave oven. By analyzing 180 features that include heating-time-difference information, we attain an average recognition accuracy of 82.3%; using the 138 features that exclude the heating-time-difference information, the average recognition accuracy is 56.2%. We also investigate the recognition accuracy under different conditions, for instance with different microwave ovens, different distances between the microwave oven and the USRP, and different data down-sampling rates. Finally, a food recognition application is implemented to demonstrate our method.
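The 180 features themselves are not listed in the abstract; as a toy sketch of the general pipeline, a few statistics can be extracted from an RSSI trace before classification. The `on_threshold` value and the feature names here are invented for illustration only.

```python
import statistics

def rssi_features(trace, on_threshold=-60.0):
    """Toy feature vector from a microwave-leakage RSSI trace (dBm).
    Samples above on_threshold are treated as 'magnetron on', giving a
    crude proxy for heating-time features."""
    heating = [v for v in trace if v > on_threshold]
    return {
        "mean_rssi": statistics.fmean(trace),
        "peak_rssi": max(trace),
        "heating_samples": len(heating),
    }
```

Feature vectors like this, computed per recording, would then be fed to a standard classifier trained on labeled food categories.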