Nowadays, Open Source Software (OSS) is widely used by individuals and organizations, and many industrial systems are built on OSS. To promote the safe and secure use and development of OSS, this paper surveys the research field of Open Source Software Engineering by introducing existing academic studies.
Recently, cycling as a sport has attracted a great deal of attention. Previous research suggests that, to perform at one's best, it is ideal to pedal at a high cadence while keeping the pedal rotation speed constant. This skill is difficult for beginners to acquire because expert cyclists develop it through long-term training. We propose a bicycle pedaling training system that uses auditory feedback. The system produces a feedback sound every quarter rotation of the pedal crank, so the user can keep the pedal rotation speed constant by synchronizing the feedback sounds with background music of constant tempo. We evaluated the approach over four weeks, and the results confirmed that the variance in pedal rotation speed of subjects trained with the system decreased significantly compared with subjects trained by a conventional method.
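The timing scheme above can be illustrated with a minimal sketch (not the authors' implementation; function names and parameters are illustrative): one feedback sound per quarter crank rotation means four sound events per rotation, so a cadence of 90 rpm lines up exactly with music events at 360 per minute.

```python
def event_times(events_per_min, duration_s):
    """Times (s) of evenly spaced events at a given per-minute rate."""
    n = int(duration_s * events_per_min / 60)
    return [round(60.0 * i / events_per_min, 6) for i in range(1, n + 1)]

def feedback_times(cadence_rpm, duration_s):
    # one feedback sound per quarter crank rotation -> 4 sounds per rotation
    return event_times(cadence_rpm * 4, duration_s)

def beat_times(tempo_bpm, duration_s):
    # beat times of background music with a constant tempo
    return event_times(tempo_bpm, duration_s)

# At a cadence of 90 rpm, the feedback sounds coincide with a 360 BPM click,
# so deviation between the two sequences indicates unsteady pedaling.
sounds = feedback_times(90, 10)
beats = beat_times(360, 10)
```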
In this paper, we propose a system using a see-through HMD to support people who tend to avoid communication (so-called "komyushō" in Japanese), especially those who have difficulty making eye contact with others. Drawing on knowledge from social welfare, we designed the system with four functions to alleviate and improve these communication symptoms: occluding the conversation partner's face in the user's visual field using face detection and a visual effect, instructing the user to correct the habit of averting their gaze, providing a diagnosis based on quantitative gaze-behavior data, and giving the user a means to escape a situation when they cannot cope with the communication. We implemented a prototype system and evaluated it to show the effectiveness of its functions and to obtain feedback for future improvements.
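The quantitative gaze diagnosis can be sketched minimally as the fraction of gaze samples that fall inside the partner's detected face region (an assumed metric for illustration; the paper's actual measure may differ):

```python
def eye_contact_ratio(gaze_points, face_box):
    """Fraction of gaze samples inside the face bounding box.

    face_box is (x, y, w, h); gaze_points is a list of (x, y) samples.
    """
    x, y, w, h = face_box
    if not gaze_points:
        return 0.0
    hits = sum(1 for gx, gy in gaze_points
               if x <= gx < x + w and y <= gy < y + h)
    return hits / len(gaze_points)

# two of four samples land on a face detected at (0, 0) with size 10x10
ratio = eye_contact_ratio([(5, 5), (50, 50), (6, 7), (100, 1)], (0, 0, 10, 10))
```

A low ratio over a session could then trigger the instruction function described above.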
Parameter tweaking is one of the fundamental tasks in editing visual digital content, such as photo color correction. A problem with parameter tweaking is that exploring a high-dimensional parameter space often requires much time and effort. To facilitate such exploration, we first present a new technique for analyzing a parameter space to obtain a distribution of human preference. Our technique uses crowdsourced human computation to collect data for this analysis. As a result, the user obtains a goodness function that computes the goodness value of a given parameter set. This goodness function enables two user interfaces for exploration: Smart Suggestion, which suggests preferable parameter sets, and VisOpt Slider, which interactively visualizes the distribution of goodness values on sliders and gently optimizes slider values while the user is editing. We applied our technique to four applications with different design parameter spaces.
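Given a learned goodness function, a suggestion interface in the spirit of Smart Suggestion can be sketched as sampling candidate parameter sets and ranking them (a minimal sketch under assumed names; the toy goodness function below stands in for the crowdsourced preference model):

```python
import random

def smart_suggestion(goodness, dim, n_samples=1000, k=3, seed=0):
    """Sample parameter sets in [0, 1]^dim and return the k best by goodness."""
    rng = random.Random(seed)
    candidates = [[rng.random() for _ in range(dim)] for _ in range(n_samples)]
    return sorted(candidates, key=goodness, reverse=True)[:k]

# toy goodness: prefers parameter sets near the center of the space
goodness = lambda p: -sum((x - 0.5) ** 2 for x in p)
best = smart_suggestion(goodness, dim=2)
```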
When users hold a large mobile device, equipped with a large touchscreen, in one hand, the region far from the thumb is difficult to reach. Users therefore have to change their grip to operate the top half of the screen, a complex action that destabilizes the grip. To solve this problem, we present TouchOver, a technique for stable single-handed thumb control of large mobile devices. The technique transfers touch events from the bottom half of the touchscreen to the top half. Using TouchOver for the top half and direct touch for the bottom half lets users operate the entire touchscreen stably with only the thumb, without changing their grip. We implemented a widget that makes an Android smartphone detect a double tap on its home button as a trigger to activate or deactivate TouchOver. We present use cases and user studies measuring the performance of TouchOver.
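The event-transfer idea reduces to a coordinate mapping; a minimal sketch (our assumption of the mapping, not the authors' implementation) forwards a bottom-half touch to the corresponding top-half point:

```python
def transfer_touch(x, y, screen_h):
    """Map a touch in the bottom half of the screen to the top half.

    Assumes the screen is split at screen_h / 2, so a touch at the very
    bottom maps to the middle and a touch at the middle maps to the top.
    """
    half = screen_h / 2
    if y >= half:               # thumb-reachable bottom half
        return (x, y - half)    # forwarded to the unreachable top half
    return (x, y)               # top-half touches pass through unchanged
```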
We propose a video annotation system called "AnnoTone", which supports video-editing processes such as cropping and effect generation by embedding annotations describing the contextual information of a scene, such as the geo-location of the video camera and the quality of the actors' performance, during recording. The system converts input annotation data into high-frequency audio signals, which are almost inaudible to the human ear, and transmits them from a smartphone speaker placed near the video camera. After recording, the embedded annotations are extracted from the video files and exploited to support video editing. Although the signals are not completely inaudible, we confirmed that they can be removed from the video files without noticeable quality loss using audio filters. We also experimentally tested the reliability of signal embedding and the durability of annotation signals under audio conversions, and showed the feasibility of the proposed technique in practical situations. We present several example applications using AnnoTone and discuss the possibility of novel video-editing techniques enabled by annotation embedding.
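One way to encode data as near-inaudible audio, sketched here as simple frequency-shift keying at assumed carrier frequencies of 18 and 19 kHz (the paper's actual modulation scheme and parameters may differ), is to emit one high-frequency tone per bit and decode by correlating each chunk against both carriers:

```python
import math

SAMPLE_RATE = 44100
F0, F1 = 18000, 19000   # near-ultrasonic carriers (assumed values)
SYMBOL_LEN = 441        # samples per bit (10 ms at 44.1 kHz)

def encode(bits):
    """FSK-encode a bit string into a list of audio samples."""
    samples = []
    for b in bits:
        f = F1 if b == "1" else F0
        samples += [math.sin(2 * math.pi * f * n / SAMPLE_RATE)
                    for n in range(SYMBOL_LEN)]
    return samples

def _power(chunk, f):
    # energy of the chunk's correlation with a sinusoid at frequency f
    c = sum(s * math.cos(2 * math.pi * f * n / SAMPLE_RATE)
            for n, s in enumerate(chunk))
    s_ = sum(s * math.sin(2 * math.pi * f * n / SAMPLE_RATE)
             for n, s in enumerate(chunk))
    return c * c + s_ * s_

def decode(samples):
    """Recover the bit string by comparing carrier energies per symbol."""
    bits = ""
    for i in range(0, len(samples), SYMBOL_LEN):
        chunk = samples[i:i + SYMBOL_LEN]
        bits += "1" if _power(chunk, F1) > _power(chunk, F0) else "0"
    return bits
```

Removing such a signal afterwards then amounts to low-pass filtering below the carrier band.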
Dragging and dropping narrow, elongated targets, such as window edges when resizing, requires accurate manipulation and takes a long time. To facilitate such tasks, we propose a new technique called Cross-drag. We developed a system that applies this technique in a realistic GUI environment, and we compared the performance of Cross-drag with other pointing methods used for drag-and-drop. In the evaluation experiment, participants dragged and dropped narrow targets of various widths, heights, and gap sizes. The operation time and error rate results showed that Cross-drag outperformed traditional pointing-based drag-and-drop under many conditions, illustrating its effectiveness.
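The abstract does not detail the interaction, but the name suggests a crossing-based acquisition, where a stroke that passes through a thin target selects it instead of requiring a precise press on it. A minimal sketch of the underlying geometric test (our assumption, not the authors' implementation) is a segment intersection check between the cursor stroke and the target edge:

```python
def crossed(stroke_a, stroke_b, edge_a, edge_b):
    """True if the cursor stroke (stroke_a -> stroke_b) properly crosses
    the thin target edge (edge_a -> edge_b)."""
    def side(p, q, r):
        # sign of the cross product: which side of line p->q the point r is on
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (side(stroke_a, stroke_b, edge_a) * side(stroke_a, stroke_b, edge_b) < 0
            and side(edge_a, edge_b, stroke_a) * side(edge_a, edge_b, stroke_b) < 0)
```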
In graph-based model checking, systems are modeled with graph structures, which are highly expressive and feature a symmetry reduction mechanism. However, this involves frequent isomorphism checking of the graphs generated in the course of model checking. Canonical labeling of graphs, which gives a unique representation to isomorphic graphs, is expected to be an effective way to check isomorphism among many graphs efficiently. It is therefore important to compute canonical forms of graphs efficiently in graph rewriting systems. For this purpose, we propose an optimization technique for McKay's canonical labeling algorithm that reuses information about graph substructures that do not change under rewriting. To enable this reuse, we reformulated McKay's algorithm to clarify which substructures of graphs are used during its execution, and designed an algorithm for successive graph canonicalization that organizes graph information so that recomputation is minimized. We evaluated the performance of the proposed algorithm and found that it achieved sublinear time complexity with respect to the number of vertices in many cases of re-canonicalization.
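The core property of canonical labeling, that isomorphic graphs map to one identical representative, can be illustrated with a brute-force sketch (purely for intuition; McKay's algorithm achieves this without enumerating all permutations):

```python
from itertools import permutations

def canonical_form(n, edges):
    """Brute-force canonical form: the lexicographically smallest relabeled
    edge set over all vertex permutations. Isomorphic graphs on n vertices
    yield the same canonical form, so isomorphism reduces to equality."""
    best = None
    for perm in permutations(range(n)):
        relabeled = tuple(sorted(tuple(sorted((perm[u], perm[v])))
                                 for u, v in edges))
        if best is None or relabeled < best:
            best = relabeled
    return best

# two differently labeled triangles (plus an isolated vertex) are isomorphic
g1 = canonical_form(4, [(0, 1), (1, 2), (2, 0)])
g2 = canonical_form(4, [(1, 3), (3, 2), (2, 1)])
```

Checking `g1 == g2` replaces a pairwise isomorphism test, which is what makes canonical forms attractive when many generated graphs must be compared.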
Nowadays, many Web applications are available on the Internet. Since anyone can access a Web application through a browser, its Web user interface (WUI) is very important. Developers normally build a WUI from a template; because designing one from scratch is costly, a pre-defined template is often used. However, pre-defined templates may not always match what the developer wants. We therefore propose an approach in which the developer extracts templates from existing Web pages. We confirmed the usefulness of our method through an evaluation focusing on correctness and development time.
We present the notion of concurrent context-oriented programming and a method for implementing it. To realize context-oriented programming in concurrent systems based on asynchronous communication, such as the Actor model, one must take special care to synchronize context changes with other computations. Our method uses reflection to solve the synchronization problem for messages that cross two contexts. In this paper, we give preliminary evaluation results using an implementation in Erlang.