Accurate, regular symptom reporting by cancer patients is of great concern to medical service providers for clinical decision making, such as adjusting medication. Since patients have limited ability to provide self-reported symptoms, we investigated how a mobile phone application can play a vital role in helping them. We used facial images captured by a smartphone to detect pain levels accurately. In this pain detection process, existing algorithms and infrastructure are reused to keep the system low-cost and user-friendly for cancer patients. To the best of our knowledge, this is the first mobile-based study of this pain management problem. The proposed algorithm classifies faces, each represented as a weighted combination of Eigenfaces; angular distance and support vector machines (SVMs) are used for classification. Longitudinal data were collected over six months in Bangladesh, and cross-sectional pain images were collected from three countries: Bangladesh, Nepal, and the United States. We found that a personalized model performs better for automatic pain assessment, and that the training set should contain varying levels of pain in each group: low, medium, and high.
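The classification step described above can be sketched in miniature: a face vector is projected onto an Eigenface basis, and the resulting weight vector is compared to labeled examples by angular distance. The basis, gallery, and labels below are hypothetical toy values, not the paper's data, and nearest-neighbor matching stands in for the full SVM pipeline.

```python
import math

def project(face, eigenfaces):
    """Represent a face vector as weights over the Eigenface basis (dot products)."""
    return [sum(f * e for f, e in zip(face, ef)) for ef in eigenfaces]

def angular_distance(w1, w2):
    """Angle (radians) between two weight vectors; smaller means more similar."""
    dot = sum(a * b for a, b in zip(w1, w2))
    n1 = math.sqrt(sum(a * a for a in w1))
    n2 = math.sqrt(sum(b * b for b in w2))
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

def classify(face, eigenfaces, gallery):
    """Nearest-neighbor pain label by angular distance in Eigenface space."""
    w = project(face, eigenfaces)
    return min(gallery, key=lambda item: angular_distance(w, item[0]))[1]

# Hypothetical orthonormal Eigenface basis in a 3-pixel toy space,
# and a gallery of (weight vector, pain label) pairs.
eigenfaces = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
gallery = [([1.0, 0.1], "low"), ([0.1, 1.0], "high")]
print(classify([0.9, 0.2, 0.05], eigenfaces, gallery))  # low
```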
The core technology of eScience is the connection of distributed resources so that they appear as one virtual instance, enabling collaborative research over the Internet on that shared, virtualized resource. eScience first embraced shared resources through support of "big science" in the Grid computing field. By contrast, emerging cloud services make high-end eScience infrastructure, such as shared computing and disk resources, affordable to ordinary researchers. Though there are many such services to choose from, users must always be authenticated as researchers and authorized when they utilize services provided by a given collaboration. Effectively leveraging the worldwide deployment of academic identity federations may allow us to build a complete and coherent eScience environment more securely, easily, and scalably. One marked recent tendency in identity federation is support for virtual organizations (VOs): organizations composed of individuals principally domiciled at, and authenticating against, a home organization but acting in a particular role within the virtual organization. A similar theme also emerged in Grid computing. However, current schemes have no common method for sharing VO information, because every virtual organization today is largely bespoke; these custom-built implementations reflect the particular needs of the federation, country, and project where the VO emerged, leading each federation to employ different standards for integrating with VOs and provisioning information to them. This paper offers a historical perspective on VO technology, first assessing its evolution in the Grid computing field and then analyzing progress in broader identity federation. Finally, potential evolutionary paths are divided into three natural categories, and we perform a technical and operational comparison of current VO technology and its capacity to meet these new use cases, both today and in envisioned futures.
Reflecting the development and operational costs acceptable to each party concerned, two of these paths are identified as the preferred choices for the short-term and long-term transitions to a unified, global VO platform.
In event-driven programming we can react to an event by binding methods to it as handlers, but such handler binding in current event systems is explicit and requires explicit reasoning about the graph of event propagation even for straightforward cases. In reactive programming, by contrast, handler binding is implicit and constructed through signals. Recent approaches supporting either event-driven or reactive programming show the need to use both styles within one program. We propose an extension that expands event systems to support reactive programming by automating handler bindings. With such an extension, programmers can use events to cover both the implicit style of reactive programming and the explicit style of event-driven programming. We first describe the essentials of reactive programming, signals and signal assignments, in terms of events, handlers, and bindings, and then point out the lack of automation in existing event systems. Unlike most prior work, we expand event systems to support signals rather than porting signals to event systems. We also present a prototype implementation and translation examples to evaluate the concept of automation. Furthermore, we discuss a comparison with predicate pointcuts in aspect-oriented programming and the details of the experimental compiler.
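The contrast between explicit and automated bindings can be made concrete with a hand-rolled sketch (the class names and API below are my own illustration, not the paper's language): an explicit event system where handlers are bound manually, plus a tiny signal layer that automates those bindings so a derived value recomputes whenever any of its source events fires.

```python
class Event:
    """Explicit style: handlers must be bound by hand."""
    def __init__(self):
        self.handlers = []
    def bind(self, handler):
        self.handlers.append(handler)
    def fire(self, *args):
        for handler in self.handlers:
            handler(*args)

class Signal:
    """Implicit style: binding to all source events is automated."""
    def __init__(self, compute, sources):
        self.compute = compute
        self.value = compute()
        for src in sources:        # automation: one bind per source event
            src.bind(self._update)
    def _update(self, *args):
        self.value = self.compute()

width_changed = Event()
state = {"width": 2, "height": 3}
# 'area' tracks its inputs without any explicit handler code at the use site.
area = Signal(lambda: state["width"] * state["height"], [width_changed])

state["width"] = 5
width_changed.fire()
print(area.value)  # 15
```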
The Pub/Sub communication model has become a basis of various applications, e.g., IoT/M2M and SNS. These application domains require new properties of the Pub/Sub infrastructure, such as supporting a large number of devices in a widely distributed manner. To meet these demands, we proposed a scalable Pub/Sub system using an OpenFlow controller, which we call SDN-Aware Pub/Sub (SAPS). SAPS utilizes both Application Layer Multicast (ALM) and OpenFlow-based multicast (OFM). We evaluated the hybrid architecture by simulation in terms of traffic and transmission-delay reduction. The results show that in the tree topology, even with only 100 subscribers, OFM can reduce inter-cluster traffic by 71.6% with 16 clusters compared to ALM-LA, and can reduce the maximum number of inter-cluster hops by 87.5%. On the other hand, with only 100 subscribers, almost all switches are involved in OFM tree construction and consume flow-table space for the topic. This indicates that our hybrid approach is effective for Pub/Sub optimization when the resource limitations of OpenFlow switches are taken into account.
Delay and disruption tolerant networks (DTNs) adopt the store-carry-and-forward paradigm. Each node stores messages in a buffer and waits for either an appropriate forwarding opportunity or the message's expiration time, i.e., its time-to-live (TTL). Two key issues influence the performance of DTN routing: the forwarding policy, which determines whether a message should be forwarded to an encountered node, and the buffer management policy, which determines which message should be sent from the queue (i.e., message scheduling) and which message should be dropped when the buffer is full. This paper proposes a DTN routing protocol, called spray-and-hop-distance-based with remaining-TTL consideration (SNHD-TTL), which integrates three features: (1) binary spray; (2) hop-distance-based forwarding; and (3) node-location-dependent remaining-TTL message scheduling. The aim is to better deliver messages under heavy congestion, especially in the "island scenario." We evaluate it by simulation-based comparison with other popular protocols, namely Epidemic as a baseline and PRoPHETv2, which performs well according to our previous study. Our simulation results show that SNHD-TTL outperforms the other routing protocols, significantly reducing overhead while increasing the total size of delivered messages.
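The binary-spray component (feature 1) follows standard spray-and-wait semantics: a node holding several copies of a message hands half of them to each encountered node until only one remains, then waits to deliver that last copy directly. A minimal sketch with hypothetical node names and copy counts:

```python
def binary_spray(copies):
    """On an encounter, keep half the copies and hand the rest over.
    With one copy left, enter the wait phase (no further spraying)."""
    if copies <= 1:
        return copies, 0
    kept = copies // 2
    return kept, copies - kept

# A message starts with 8 copies at source node A; trace the spray phase
# over a hypothetical sequence of pairwise encounters.
holders = {"A": 8}
for giver, receiver in [("A", "B"), ("A", "C"), ("B", "D")]:
    kept, given = binary_spray(holders[giver])
    holders[giver] = kept
    if given:
        holders[receiver] = holders.get(receiver, 0) + given
print(holders)  # {'A': 2, 'B': 2, 'C': 2, 'D': 2}
```

Note that the total number of copies in the network stays constant (here, 8), which is what bounds the overhead compared to Epidemic flooding.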
Scale-free structure is one of the most notable properties of the Internet as a complex network. Many researchers have investigated the end-to-end performance (e.g., throughput, packet loss probability, and round-trip time between source/destination nodes) of TCP congestion control mechanisms, but the impact of the scale-free structure on TCP performance has not been fully understood. In this paper, we analyze TCP performance on a scale-free tree whose strength of the scale-free property can be adjusted by a parameter. A scale-free tree represents the communication kernel of a scale-free network, since TCP mainly transmits packets along a shortest path between TCP source/destination nodes, and most shortest paths are included in the scale-free tree. Our numerical results show that the scale-free structure of a network improves TCP performance, and that this improvement is caused by a reduction in the average path length and a reduction of the traffic intensity at the bottleneck link. Furthermore, we confirm the validity of our analysis through a comparison with an optimization-based analysis.
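The abstract does not reproduce its analytical model, but the intuition that a shorter average path improves TCP throughput can be illustrated with the well-known Mathis approximation, rate ≈ MSS / (RTT · √p), which is not from this paper; the MSS, RTT, and loss values below are hypothetical.

```python
import math

def mathis_throughput(mss_bytes, rtt_s, loss_prob):
    """Mathis et al. approximation of steady-state TCP throughput (bytes/s):
    rate ~ MSS / (RTT * sqrt(p))."""
    return mss_bytes / (rtt_s * math.sqrt(loss_prob))

# Hypothetical numbers: halving the average path length roughly halves RTT.
long_path = mathis_throughput(1460, rtt_s=0.100, loss_prob=0.01)
short_path = mathis_throughput(1460, rtt_s=0.050, loss_prob=0.01)
print(short_path / long_path)  # 2.0: the throughput bound doubles
```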
In Infrastructure-as-a-Service (IaaS) clouds, users manage the systems inside virtual machines (VMs), called user VMs, through remote management systems (RMSes). To allow users to manage their VMs even during failures inside the VMs, IaaS usually provides out-of-band remote management, performed indirectly via an RMS server in a privileged VM called the management VM. However, this management is discontinued when a user VM is migrated, because the RMS server in the management VM at the source host is terminated on VM migration. Even worse, pending data between the RMS client and the user VM is lost. In this paper, we propose D-MORE for continuing out-of-band remote management across VM migration. D-MORE provides a privileged and migratable VM called DomR and performs out-of-band remote management of a user VM via DomR. During VM migration, it synchronously co-migrates DomR and its target VM and transparently maintains the connections between the RMS client, DomR, and the target VM. We have implemented D-MORE in Xen and confirmed that a remote user could manage a VM via DomR after the VM had been migrated. Our experiments showed that input data was not lost during VM migration and that the overhead of D-MORE was acceptable.
In modern cryptography, the secret sharing scheme is an important cryptographic primitive used in various situations. In this paper, timed-release secret sharing (TR-SS) schemes with information-theoretic security are studied for the first time. TR-SS is a secret sharing scheme with the property that more than a threshold number of participants can reconstruct a secret from their shares only once the time specified by the dealer has come. Specifically, we first introduce models and security formalizations for two kinds of TR-SS, based on the traditional secret sharing scheme and information-theoretic timed-release security. We then derive tight lower bounds on the sizes of shares, time-signals, and entities' secret keys required for each TR-SS scheme. In addition, we propose direct constructions for the TR-SS schemes; each construction is optimal in the sense that it meets the corresponding bound with equality. As a result, we show that timed-release security can be realized without any additional redundancy in the share size.
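TR-SS builds on the traditional threshold secret sharing underlying it. As background, here is a minimal Shamir (k, n) scheme over a small prime field; the timed-release component is not shown, and the modulus and secret are toy values.

```python
import random

P = 2087  # small prime field modulus (toy value; real schemes use large primes)

def make_shares(secret, k, n, rng=random.Random(0)):
    """Shamir (k, n): pick a random degree-(k-1) polynomial f with
    f(0) = secret; share i is the point (i, f(i) mod P)."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P); needs any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(1234, k=3, n=5)
print(reconstruct(shares[:3]))  # 1234: any 3 of the 5 shares suffice
```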
The theory of general relativity predicts that the strong gravity of a black hole bends the trajectories of light rays. By calculating these bent trajectories numerically, we can generate a 3D CG image from a viewpoint set in the black hole spacetime. Existing studies adopt the ray tracing method for rendering, whereas we adopt the rasterization method. To achieve fast perspective projection in the curved spacetime, we calculate more than thirty million light trajectories in advance on an optimally constructed computational mesh and let a GPU interpolate them at rendering time. Furthermore, to render the lines and triangular polygons of CG objects accurately, we apply a dynamic subdivision technique (tessellation). Various types of CG programs can be written as easily as in conventional 3D CG programming with a common graphics API. Utilizing the computing power of recent GPUs, a rendering performance of nearly one million polygons per second is achieved even on a notebook PC.
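The precompute-then-interpolate idea can be sketched in plain Python. The real system tabulates millions of ray trajectories and interpolates them per fragment on the GPU; the one-dimensional "deflection" function below is a hypothetical stand-in, not the relativistic computation.

```python
import math

# Hypothetical precomputed table: a quantity sampled on a coarse mesh of
# view angles (radians). In the real renderer this holds ray trajectories.
mesh = [i * 0.1 for i in range(11)]      # 0.0 .. 1.0
table = [math.sin(a) for a in mesh]      # stand-in for "deflection"

def interpolate(angle):
    """Linear interpolation between the two nearest precomputed samples,
    mimicking what the GPU does per fragment at render time."""
    idx = min(int(angle / 0.1), len(mesh) - 2)
    t = (angle - mesh[idx]) / 0.1
    return table[idx] * (1 - t) + table[idx + 1] * t

approx = interpolate(0.25)
exact = math.sin(0.25)
print(abs(approx - exact) < 1e-3)  # True: even a coarse mesh is accurate
```

The design trade-off is classic: a denser precomputed mesh costs memory but lets cheap per-pixel interpolation replace expensive per-pixel integration of the light equations.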
We present a development support tool, called LogChamber, which infers source-code locations by analyzing the runtime logs of mobile applications. During development, developers insert log-function calls into applications to confirm that they run as expected. They must then estimate the program's runtime behavior in order to identify the locations of unintended behavior. This process relies on the abilities of individual developers and is often not easy. Moreover, most runtime environments of mobile applications provide only limited resources and therefore cannot save sufficiently many runtime logs; the situation is made even worse by careless insertion of log-function calls. The method presented in this paper analyzes the static source code together with the runtime logs, and supports developers by quickly inferring candidate locations of log-function calls. For fast inference, it extracts log strings from the source code and constructs an index of their locations in advance. We implemented our method as a plugin for Android Studio, one of the major integrated development environments for Android applications, and report experiments with the implementation on real open-source applications.
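The indexing idea can be sketched as follows; the file name, log format, and substring-matching rule are hypothetical simplifications of LogChamber's actual analysis.

```python
def build_index(sources):
    """Map each string literal passed to a log call to its source locations.
    'sources' maps file paths to lists of source lines."""
    index = {}
    for path, lines in sources.items():
        for lineno, line in enumerate(lines, start=1):
            if "Log.d(" in line:
                literal = line.split('"')[1]   # naive literal extraction
                index.setdefault(literal, []).append((path, lineno))
    return index

def locate(runtime_log_line, index):
    """Infer candidate source locations for one runtime log line."""
    return [loc for literal, locs in index.items()
            if literal in runtime_log_line for loc in locs]

sources = {"Main.java": ['int x = 0;',
                         'Log.d(TAG, "starting sync");',
                         'Log.d(TAG, "sync failed");']}
index = build_index(sources)
print(locate('D/MyApp: sync failed', index))  # [('Main.java', 3)]
```

Building the index once ahead of time is what makes the per-log-line lookup fast, which matters when many runtime log lines must be mapped back to code.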
Werner's set-theoretical model is one of the simplest models of CCω. It combines a functional view of predicative universes with a collapsed view of the impredicative sort Prop. However, this model of Prop is so coarse that the principle of excluded middle P ∨ ¬P holds. In this paper, we interpret Prop into a topological space (a special case of a Heyting algebra) to make the model more intuitionistic without sacrificing simplicity. We prove soundness and show some applications of our model.
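The topological interpretation can be made concrete with the two-point Sierpiński space: propositions are open sets, disjunction is union, and negation is the interior of the complement. Excluded middle then fails, exactly the intuitionistic behavior sought. The encoding below is my own illustration, not the paper's model.

```python
# Sierpiński space: points {0, 1}; open sets: {}, {1}, {0, 1}.
OPENS = [frozenset(), frozenset({1}), frozenset({0, 1})]
TOP = frozenset({0, 1})

def interior(s):
    """Largest open set contained in s."""
    return max((o for o in OPENS if o <= s), key=len)

def neg(p):
    """Heyting negation: interior of the complement."""
    return interior(TOP - p)

P = frozenset({1})        # a proposition = an open set
lem = P | neg(P)          # P ∨ ¬P = union of the two opens
print(lem == TOP)  # False: excluded middle fails in this model
```

Here ¬P is empty (the only open set inside {0} is ∅), so P ∨ ¬P = {1} ≠ {0, 1}; the classical collapse of Prop is avoided.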
Although long queries still constitute a small fraction of the queries submitted to Web search engines, their usage is gradually increasing. However, retrieval effectiveness decreases as query length increases, and long queries are very likely to return few Web pages. We target sentential queries, a type of long query, and propose a method called sentential query paraphrasing to improve their retrieval performance, especially recall. We are motivated by the assumption that a sentence is an indivisible whole: removing terms or phrases from it would lose information or cause query drift. We therefore paraphrase sentential queries to avoid losing information and thus preserve the completeness of the query. Take the sentential query "apples pop a powerful pectin punch," for example. Its meaning changes if one or more terms are removed, and few Web pages are returned by conventional search engines. In contrast, querying with its paraphrases, such as "apples contain a lot of pectin" or "apples are rich in pectin," retrieves more Web pages. Experimental results show that our method can acquire more paraphrases from the noisy Web, and that with the help of paraphrases more Web pages can be retrieved, especially for sentential queries that find no answers with their original expressions.
Informatics, or Computer Science, is an important subject in today's school education. Informatics can be presented as a discipline for understanding technology in a deeper way: the understanding behind computer programs. Bringing Informatics to schools means preparing young people to be creators of information technology, not only users of technological devices. To achieve this, we need to introduce Informatics concepts into primary, basic (K-9), and secondary (K-12) education. On the other hand, we need to help people solve problems by using technology and by developing computational thinking in various areas. This paper presents a short overview of Informatics education in the schools of Lithuania, with a focus on a future modern Informatics and Information Technology curriculum for K-12 education. The importance of informal education in Informatics concepts and computational thinking through contests is discussed as well. A few examples of short tasks for understanding Informatics concepts and developing computational thinking skills are presented.
The use of data mining in the education sector has increased in the recent past. One reason for this is the wide use of learning management systems (LMS), which store data related to learning activities. The goal of this research is to predict individual learning styles from the Moodle LMS by analyzing log data with data mining techniques. We use the Waikato Environment for Knowledge Analysis (Weka) as the data mining tool and compare the performance of several data mining techniques on course log data. Our experimental results show that the J48 decision tree classification algorithm works best with our dataset. We also propose a group learning map that visualizes the learning styles in a class, which can help instructors and learners achieve learning outcomes more effectively.
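J48 is Weka's implementation of the C4.5 decision tree, which selects splits by information gain (refined into gain ratio). The core computation can be sketched on a hypothetical log-derived dataset; the feature names, values, and learning-style labels below are invented for illustration only.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attr, label="style"):
    """Entropy reduction from splitting on one attribute; C4.5/J48 uses
    this (via gain ratio) to rank candidate splits."""
    base = entropy([r[label] for r in rows])
    n = len(rows)
    remainder = 0.0
    for value in {r[attr] for r in rows}:
        subset = [r[label] for r in rows if r[attr] == value]
        remainder += len(subset) / n * entropy(subset)
    return base - remainder

# Hypothetical Moodle-log features per learner.
rows = [{"forum_posts": "high", "quiz_retries": "low",  "style": "social"},
        {"forum_posts": "high", "quiz_retries": "high", "style": "social"},
        {"forum_posts": "low",  "quiz_retries": "high", "style": "solitary"},
        {"forum_posts": "low",  "quiz_retries": "low",  "style": "solitary"}]
print(information_gain(rows, "forum_posts"))   # 1.0: perfect split
print(information_gain(rows, "quiz_retries"))  # 0.0: uninformative
```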