Miniaturization is one of the keys to realizing ubiquitous and wearable devices with greatly improved practicality. The barriers to further miniaturization include energy sources and systems for user input/output. Completely removing direct physical interaction with the user and conducting all interaction through intermediary devices is an easy route to further device miniaturization. Direct interaction, however, provides an alternative, redundant, and sometimes quite intuitive means of device control, so its implementation warrants study. This paper explores how to integrate energy supplies with input systems while preserving miniaturization. The proposed technique detects a deliberately sequenced interruption of the energy supply, forced by the user, as a command. It is implemented on prototypes with a battery and a photovoltaic (PV) module and, finally, applied to active tags as a typical example. The technique is practical and promising for further device miniaturization.
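As an illustration of the command-detection idea, the interruption pattern can be decoded from the timestamps of supply-loss events. The sketch below is a hypothetical minimal decoder, not the paper's implementation; the grouping window and the mapping from burst size to command are assumptions.

```python
def decode_interruption_command(event_times, window=2.0):
    """Group supply-interruption timestamps into bursts separated by more
    than `window` seconds, and return the size of each burst as a command.
    The window length and burst-to-command mapping are illustrative."""
    if not event_times:
        return []
    commands = []
    burst = 1
    for prev, cur in zip(event_times, event_times[1:]):
        if cur - prev <= window:
            burst += 1          # same burst: interruptions close together
        else:
            commands.append(burst)
            burst = 1           # long gap: a new burst begins
    commands.append(burst)
    return commands

# Three quick interruptions, a pause, then two more:
# decode_interruption_command([0.0, 0.5, 1.1, 10.0, 10.4]) -> [3, 2]
```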
This paper describes a user-installable indoor positioning system based on a new wireless beaconing method, called a Wi-Fi location beacon, and a pedestrian dead reckoning (PDR) module. The beaconing method offers accurate positioning using a high-gain beam antenna in the 5 GHz Wi-Fi band. The antenna forms a narrow hotspot of received signal strength in the space immediately below itself; user devices detect the hotspot by monitoring the received signal strength indicator (RSSI). The beaconing module is also resistant to jamming by other wireless systems and noise. Experimental results show that the proposed system works in a high-density Wi-Fi environment with over 100 Wi-Fi stations, with a positioning error of about 3.5 m at CDF = 90%. The positioning accuracy is slightly inferior to that of previous systems based on the Wi-Fi fingerprint method, but the proposed system achieves a similar function while being easier to install and maintain.
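The hotspot-detection step on the user device can be sketched as a simple check on recent RSSI samples. The threshold and sample count below are hypothetical placeholders, not values from the paper.

```python
def in_hotspot(rssi_samples, threshold_dbm=-50.0, min_hits=3):
    """Return True when the device appears to be inside the narrow RSSI
    hotspot formed below the beam antenna: at least `min_hits` of the
    recent samples exceed `threshold_dbm`. Both parameters are assumed
    values for illustration."""
    return sum(1 for r in rssi_samples if r > threshold_dbm) >= min_hits
```

Requiring several strong samples, rather than a single one, makes the detection more robust to momentary interference, in keeping with the abstract's emphasis on noise resistance.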
In this paper, we 1) provide a real nursing dataset for mobile activity recognition that can be used for supervised machine learning, 2) provide a large dataset, collected over two years, that combines sensor data with patient medical records, and 3) propose a method for recognizing activities over a whole day by utilizing prior knowledge about the activity segments in a day. In the proposed method, we 1) convert a set of segment timestamps into a prior probability of the activity segment by exploiting the concept of importance sampling, 2) obtain the likelihood of traditional recognition methods for each local time window within the segment range, and 3) apply Bayesian estimation by marginalizing the conditional probability of estimating the activities over the segment samples. In an evaluation on the dataset, the proposed method outperformed the traditional method, which uses no prior knowledge, by up to 25.81% in balanced classification rate, and by 6.5% in F-measure when a one-hour margin is accepted. Moreover, the proposed method significantly reduces the duration error of activity segments, from 324.2 seconds for the traditional method to 74.6 seconds at maximum. Finally, we demonstrate data mining by applying our method to the larger dataset with additional hospital data.
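The core Bayesian step, combining the segment prior with the per-window likelihood, can be sketched as follows. The activity labels and probability values are illustrative, not taken from the dataset.

```python
def posterior(prior, likelihood):
    """Combine a prior over activity classes with a per-window likelihood
    by Bayes' rule, then normalize to a probability distribution."""
    unnorm = {a: prior[a] * likelihood.get(a, 0.0) for a in prior}
    z = sum(unnorm.values())
    if z == 0.0:
        return dict(prior)  # no evidence in this window: keep the prior
    return {a: v / z for a, v in unnorm.items()}

# Example: an uninformative prior sharpened by the window likelihood.
p = posterior({"vital-check": 0.5, "meal-assist": 0.5},
              {"vital-check": 0.8, "meal-assist": 0.2})
```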
An array database is effective for managing massive amounts of sensor data, and the window aggregate is a popular operator on such databases. We propose an efficient window aggregate method over multi-dimensional array data based on incremental computation. We improve five types of aggregates by exploiting a different data structure for each: a list for summation and average, a heap for maximum and minimum, and a balanced binary search tree for percentile. We design and fully implement the proposed method in SciDB using its plugin mechanism, and evaluate the performance through experiments using synthetic data and the JRA-55 meteorological dataset. The proposed method achieves 17.9x, 12.5x, and 10.2x performance improvements for the minimum, summation, and percentile operators, respectively, compared with SciDB's built-in operators, and these results align with our time-complexity analysis.
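As a one-dimensional illustration of the incremental idea, a heap-based window minimum can reuse the previous window's heap with lazy deletion instead of rescanning each window from scratch. This is a simplified sketch, not the SciDB plugin's multi-dimensional implementation.

```python
import heapq

def sliding_min(values, w):
    """Incremental window minimum over a 1-D sequence: each result reuses
    the previous window's heap rather than rescanning all w elements.
    Entries that have slid out of the window are discarded lazily when
    they surface at the top of the heap."""
    heap, out = [], []
    for i, v in enumerate(values):
        heapq.heappush(heap, (v, i))
        if i >= w - 1:
            # drop stale top entries whose index has left the window
            while heap[0][1] <= i - w:
                heapq.heappop(heap)
            out.append(heap[0][0])
    return out
```

Each element is pushed and popped at most once, giving amortized O(log n) work per window instead of O(w) for a naive rescan.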
An adaptive middleware system for ubiquitous computing environments, which are dynamic by nature, is proposed. The system introduces the relocation of software components that define functions between computers as a basic mechanism for adaptation in ubiquitous computing. It also defines a language for specifying adaptation policies. Since the language is defined on a theoretical foundation, it enables us to reason about and predict adaptations beforehand, and to detect conflicts or divergences that the adaptations may cause. The system supports general-purpose software components and relocates them according to policies described in the language. We describe the design and implementation of the system and present two practical applications.
In our previous work, we proposed an autonomous decentralized control mechanism (ADCM), based on the Markov chain Monte Carlo method, that can control the probability distribution of a system performance variable. We proved that the probability distribution controlled by our ADCM is described by a few macro parameters, and discovered a law relating the macro parameters to the external environment of the system. In this paper, on the basis of this law, we design an autonomous decentralized adaptive function that adapts to changes in the external environment. This function ensures the robustness of our ADCM against a changing environment. We apply our ADCM with the proposed adaptive function to a virtual machine placement problem in data center networks (DCNs). Simulation experiments confirm that the proposed adaptive function effectively deals with several DCN scenarios with changing environments.
Non-photorealistic rendering (NPR) creates images in artistic styles such as paintings. In this field, a number of methods for converting photographed images into non-photorealistic ones have been developed; they can be categorized into filter-based and exemplar-based approaches. In this paper, we focus on the exemplar-based approach and propose a novel method that transfers the style of a reference pictorial image to a photographed image. Specifically, we first input a pair of target and reference images. The target image is converted by minimizing an energy function defined from the difference in intensities between the output and target images and the pattern dissimilarity between the output and reference images. By minimizing this energy function, the proposed method transfers the structures and colors of textures in the reference image and generates continuous textures. In experiments, we demonstrate the effectiveness of the proposed method using a variety of images and examine how parameter changes and intensity adjustment in pre-processing influence the resulting images.
When manufacturing or 3D-printing a product using a computer, a program that procedurally controls the manufacturing machines or 3D printers is required, and G-code is widely used for this purpose. G-code was developed for controlling subtractive manufacturing (cutting work), and designers historically wrote G-code programs by hand; in recently developed environments, the designer instead describes a declarative model using computer-aided design (CAD), and the computer converts it to a G-code program. However, because the process of additive manufacturing, of which FDM-type 3D-printing is a prominent example, is more intuitive than subtractive manufacturing, it is sometimes advantageous for the designer to describe an abstract procedural program directly. This paper therefore proposes a method for generating G-code for a 3D printer by describing a Python program with a library for procedural 3D design, and presents use cases. Although the shapes printable by the method are restricted, the method can eliminate layers and layer seams, as well as the support structures that conventional methods require when an overhang exists, enabling seamless and aesthetic printing.
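The flavor of such procedural G-code generation can be sketched in plain Python; the function below emits G-code for a single square perimeter. The function name, parameters, and extrusion constant are hypothetical illustrations, not the paper's library API.

```python
def gcode_square(side=20.0, z=0.2, feed=1200, extrude_per_mm=0.05):
    """Emit G-code for one square perimeter at height z (millimeters).
    The extrusion amount E grows with travelled distance; all parameter
    values here are illustrative placeholders."""
    pts = [(0, 0), (side, 0), (side, side), (0, side), (0, 0)]
    lines = ["G90",                      # absolute positioning
             f"G1 Z{z:.2f} F{feed}"]     # move to layer height
    e = 0.0
    x0, y0 = pts[0]
    lines.append(f"G0 X{x0:.2f} Y{y0:.2f}")  # travel move, no extrusion
    for x, y in pts[1:]:
        dist = ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5
        e += dist * extrude_per_mm
        lines.append(f"G1 X{x:.2f} Y{y:.2f} E{e:.4f} F{feed}")
        x0, y0 = x, y
    return "\n".join(lines)
```

A real procedural library would compose many such primitives (arcs, spirals, non-planar paths) into a full toolpath; the point is that the designer writes the motion program directly rather than a declarative model.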
Logic puzzles such as Sudoku are described by a set of properties that a valid solution must have, and constraints are a useful technique for describing and solving for such properties. However, constraints are less suited to expressing the imperative interactions in a user interface for logic puzzles, a domain that is more readily expressed in the object-oriented paradigm. Object constraint programming provides a design for integrating constraints with dynamic, object-oriented programming languages: it allows developers to encode multi-way constraints over objects using existing object-oriented abstractions, and these constraints are automatically maintained at run-time. In this paper, we present an application of this design to logic puzzles in the Squeak/Smalltalk programming environment, as well as an extension of the design and of the formal semantics of Babelsberg that allows declaring constraints using the imperative collection API provided in Squeak. We argue that our implementation facilitates creating applications that use imperative construction of user interfaces and mutable program state as well as constraint satisfaction techniques for different parts of the system. The main advantage of our approach is that it moves the burden of maintaining constraints from the developer to the runtime environment, while keeping the development experience close to the purely object-oriented approach.
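The following Python sketch mimics run-time constraint maintenance over a collection, though only as one-way checking with rollback rather than the multi-way constraint solving described here, and outside Squeak/Smalltalk; all names are hypothetical.

```python
class ConstrainedList:
    """A list whose declared constraint is re-checked automatically after
    every mutation; violating assignments are rolled back. A toy stand-in
    for run-time constraint maintenance, not a Babelsberg implementation."""

    def __init__(self, items, constraint):
        self._items = list(items)
        self._constraint = constraint
        assert constraint(self._items), "initial state violates constraint"

    def set(self, i, value):
        old = self._items[i]
        self._items[i] = value
        if not self._constraint(self._items):
            self._items[i] = old  # roll back to keep constraint satisfied
            raise ValueError("assignment would violate constraint")

    def items(self):
        return list(self._items)

# An all-different constraint, as on a Sudoku row:
row = ConstrainedList([1, 2, 3], lambda xs: len(set(xs)) == len(xs))
```

A true multi-way system would instead adjust other variables to re-satisfy the constraint; here the mutation is simply rejected, which is the simplest behavior that keeps the invariant without developer effort.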
Graphs play an important role today in managing big data, and supporting declarative graph queries is one of the most crucial requirements for efficiently manipulating graph databases. Structural recursion has been studied for graph querying and graph transformation. However, most previous studies of graph structural recursion do not exploit the power of parallel computing in practice: the bulk semantics used for parallel evaluation of structural recursion still imposes many constraints that limit parallel query performance. In this paper, we propose a framework that systematically generates structural recursive functions from high-level declarative graph queries and then evaluates the generated functions efficiently on top of the Pregel model. Our solution thus relieves developers of the complexity of writing efficient structural recursive functions by hand.
Given a large collection of time-evolving online user activities, such as Google Search queries for multiple keywords of various categories (celebrities, events, diseases, etc.), consisting of d keywords/activities for l countries/locations over a duration of n time steps, how can we find patterns and rules? For example, assume that we have the online search volume for “Harry Potter”, “Barack Obama”, and “Amazon” for 232 countries/territories from 2004 to 2015, which includes external shocks, sudden changes in search volume, and more. How do we go about capturing the non-linear evolution of local activities and forecasting future patterns? Our goal is to analyze a large collection of time-evolving sequences and, moreover, to answer the following questions: (a) Are there any important external shocks/events related to the keywords in the sequences? (b) If there are, can we detect them automatically? (c) Are there any countries/territories that react differently to the global trend? In this paper, we present Δ-SPOT, a unifying analytical non-linear model for large-scale web search data, as well as an efficient and effective fitting algorithm that solves the above problems. Δ-SPOT can also forecast the long-range future dynamics of the keywords/queries. Extensive experiments on real data show that our method outperforms other effective non-linear mining methods in terms of accuracy in both fitting and forecasting.
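As a toy stand-in for this kind of non-linear model fitting (not the Δ-SPOT algorithm itself), a logistic adoption curve can be fitted to a volume series by grid search over its parameters; all grids and values below are illustrative.

```python
import math

def logistic(t, K, r, t0):
    """Logistic curve: carrying capacity K, growth rate r, midpoint t0."""
    return K / (1.0 + math.exp(-r * (t - t0)))

def fit_logistic(series, Ks, rs, t0s):
    """Return the (K, r, t0) triple from the given parameter grids that
    minimizes squared error against the observed series. Grid search is
    a crude substitute for an efficient fitting algorithm."""
    best, best_err = None, float("inf")
    for K in Ks:
        for r in rs:
            for t0 in t0s:
                err = sum((logistic(t, K, r, t0) - y) ** 2
                          for t, y in enumerate(series))
                if err < best_err:
                    best, best_err = (K, r, t0), err
    return best
```

A model like Δ-SPOT additionally has to handle external shocks and per-location deviations from the global trend, which this single-curve sketch ignores.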
We present a method for selecting good locations, each of which is close to desirable facilities (stations, warehouses, promising customers' houses, etc.) and far from undesirable facilities (competitors' shops, noise sources, etc.). The skyline query, which selects non-dominated objects, is a well-known method for selecting a small number of desirable objects, and we use its idea to select good locations. However, locations are two-dimensional data, while the objects in conventional skyline queries are zero-dimensional data, and comparing two-dimensional data is much more complicated than comparing zero-dimensional data. In this paper, we solve the skyline query problem for two-dimensional data, i.e., areas on a map. Experimental evaluations show that the proposed method finds a reasonable number of desirable skyline areas and can help users find good locations.
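For reference, the classic skyline over zero-dimensional objects (points) can be computed as below, assuming smaller is better in every dimension; the paper's contribution is extending this non-domination test to two-dimensional areas on a map.

```python
def skyline(points):
    """Return the non-dominated points. A point p dominates q when p is
    no worse than q in every dimension and strictly better in at least
    one (here, smaller values are better)."""
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

This quadratic scan is the simplest formulation; in practice, mixed min/max attributes (e.g. minimize distance to stations, maximize distance to competitors) are handled by negating the maximized dimensions.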