Let G be a probabilistic graph in which vertices fail independently with known probabilities, and let K be a specified subset of vertices. The K-terminal reliability of G is the probability that all vertices in K are connected. When |K| = 2, the K-terminal reliability is called the 2-terminal reliability: the probability that the source vertex is connected to the destination vertex. Computing the K-terminal and 2-terminal reliabilities has been proven to be #P-complete in general. This work demonstrates that the 2-terminal reliability problem can be solved in polynomial time on multi-tolerance graphs, and extends the result to the K-terminal reliability problem on bounded multi-tolerance graphs.
In lighting control systems, accurate data on artificial light (lighting coefficients) are essential for illumination control accuracy and energy-saving efficiency. This research proposes a novel Lambertian-Radial Basis Function Neural Network (L-RBFNN) to model both the lighting coefficients and the illumination environment of an office. By adding a Lambertian neuron to represent the rough theoretical illuminance distribution of a lamp and modifying the RBF neurons to regulate the distribution shape, L-RBFNN solves the instability problem of conventional RBFNN and achieves higher modeling accuracy. Simulations of both single-light and multiple-light modeling are conducted and compared with other methods such as the Lambertian function, cubic spline interpolation, and conventional RBFNN. The results show that: 1) L-RBFNN models artificial light with imperceptible modeling error; 2) compared with existing methods, L-RBFNN provides better performance with lower modeling error; and 3) the number of training sensors can be reduced to equal the number of lamps, making the method easier to apply in real-world lighting systems.
In most existing centralized lighting control systems, the lighting control problem (LCP) is reformulated as a constrained minimization problem and solved by linear programming (LP). In real-world applications, however, the LCP is discrete and nonlinear, which means a more accurate algorithm could yield further energy savings. In this paper, particle swarm optimization (PSO) is successfully applied to office lighting control, and a linear-programming-guided particle swarm optimization (LPPSO) algorithm is developed to achieve considerable energy savings while satisfying users' lighting preferences. Simulations with the proposed control algorithms are conducted and analyzed in two DIALux office models, one with a small number of lamps and one with a large number. Comparison with other widely used methods, including LP, shows that LPPSO consistently achieves greater energy savings than other lighting control methods.
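The core of a PSO-based dimmer can be sketched compactly. The following is a minimal illustration of the general idea, not the paper's LPPSO: it assumes a hypothetical setup in which `coeffs[i][j]` gives the illuminance sensor i receives from lamp j at full output, minimizes total dimming as a proxy for power, and enforces per-sensor illuminance targets with a quadratic penalty.

```python
import random

def pso_dimming(coeffs, targets, n_particles=20, iters=150, seed=1):
    """Minimal PSO sketch: find dimming levels u in [0,1]^m that minimise
    total dimming (a proxy for power) while meeting illuminance targets."""
    random.seed(seed)
    m = len(coeffs[0])

    def cost(u):
        # quadratic penalty for every under-illuminated sensor
        penalty = sum(max(0.0, t - sum(c * x for c, x in zip(row, u))) ** 2
                      for row, t in zip(coeffs, targets))
        return sum(u) + 1e3 * penalty

    pos = [[random.random() for _ in range(m)] for _ in range(n_particles)]
    vel = [[0.0] * m for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(m):
                # standard inertia + cognitive + social velocity update
                vel[i][j] = (0.7 * vel[i][j]
                             + 1.4 * random.random() * (pbest[i][j] - pos[i][j])
                             + 1.4 * random.random() * (gbest[j] - pos[i][j]))
                pos[i][j] = min(1.0, max(0.0, pos[i][j] + vel[i][j]))
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

# Two lamps, two cross-coupled sensors, a 300 lx target at each sensor.
coeffs = [[400.0, 100.0], [100.0, 400.0]]
levels = pso_dimming(coeffs, [300.0, 300.0])
```

In the paper's LPPSO, an LP solution additionally guides the swarm; here plain PSO suffices to show how the constrained minimization is handled without linearizing the problem.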
Bug reports expressed in natural-language text are often verbose, ambiguous, and poorly written, which makes detecting duplicate bug reports challenging. Current automatic duplicate-detection techniques have focused mainly on textual information and ignored other useful factors. To improve detection accuracy, we propose a new approach called the LNG (LDA and N-gram) model, which combines the advantages of the topic model LDA and the word-based N-gram model. LNG considers multiple factors that potentially affect detection accuracy, including textual information, semantic correlation, word order, contextual connections, and categorical information. In addition, the N-gram component adopted in the LNG model is improved by modifying its similarity algorithm. Experiments are conducted on more than 230,000 real bug reports from the Eclipse project. For evaluation, we propose a new metric, the exact-accuracy (EA) rate, which sharpens the understanding of duplicate-detection performance. The results show that the recall, precision, and EA rates of the proposed method are all higher than those of the two models used separately, and the recall rate improves by 2.96%-10.53% over the state-of-the-art approach DBTM.
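As a concrete illustration of the word-based side of such a model, an n-gram similarity between two report summaries can be computed as a Dice coefficient over word bigrams. This is a generic sketch, not the paper's modified similarity algorithm, and the report strings are invented.

```python
def ngrams(tokens, n=2):
    """Return the set of word n-grams in a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def dice_similarity(a, b, n=2):
    """Dice coefficient over word n-grams: 2|A ∩ B| / (|A| + |B|)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga and not gb:
        return 1.0
    return 2 * len(ga & gb) / (len(ga) + len(gb))

# Two hypothetical (near-duplicate) bug report summaries.
r1 = "crash when opening large project file".split()
r2 = "eclipse crash when opening a large project".split()
sim = dice_similarity(r1, r2)
```

In a combined model like LNG, a score of this kind would be blended with an LDA topic similarity, so that reports sharing topics but little exact wording can still be matched.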
The goal of software testing should go beyond simply finding defects. Ultimately, testing should focus on increasing customer satisfaction. Defects detected in areas of the software that customers are especially interested in cause more customer dissatisfaction, and if such defects accumulate, the software may be shunned in the marketplace. It is therefore important to focus on reducing defects in areas that customers consider valuable. This article proposes a value-driven V-model (V2 model) that captures customer values and reflects them in the test design to increase customer satisfaction and raise test efficiency.
Mapping instances to the Linked Open Data (LOD) cloud plays an important role in enriching instance information, since the LOD cloud contains abundant interlinked instances. Consequently, many techniques have been introduced for mapping instances to a LOD data set; however, most of them focus only on tackling the problem of heterogeneity. The problem posed by the large number of LOD data sets has yet to be addressed: mapping an instance to a single LOD data set is not sufficient, because an identical instance might not exist in that data set. In this article, we therefore introduce a heuristic-expansion-based framework for mapping instances to LOD data sets. The key idea is to gradually expand the search space from one data set to another in order to discover identical instances. In experiments, the framework successfully mapped instances to the LOD data sets, increasing coverage to 90.36%. The results also indicate that the framework's heuristic function efficiently limits the expansion space to a reasonable size. With this limited expansion space, the framework reduced the number of candidate pairs to 9.73% of the baseline without degrading performance.
SSDs consist of non-mechanical components (host interface, control core, DRAM, flash memory, etc.) whose integrated behavior is not well known, which makes an SSD seem like a black box to users. We analyzed the power consumption of four SSDs under standard I/O operations and found that (a) the power consumption of SSDs is not significantly lower than that of HDDs, and (b) all SSDs we tested had similar power consumption patterns, which we assume results from their internal parallelism. SSDs have a parallel architecture that connects flash memories by channel or by way, and this architecture improves SSD performance if its details are known to the file system. This paper proposes three SSD characterization algorithms that infer the characteristics of an SSD, such as internal parallelism, I/O unit, and page allocation scheme, by measuring its power consumption under workloads of various sizes. The algorithms are applied to four real SSDs to find: (i) the internal parallelism that decides whether to perform I/Os in a concurrent or an interleaved manner, (ii) the I/O unit size that determines the maximum size that can be assigned to a flash memory, and (iii) the page allocation method that maps the logical addresses of write operations requested by the host to physical addresses in flash memory. We also developed a data sampling method to collect power consumption patterns consistently for each SSD. Applying the three algorithms to the four SSDs revealed their flash memory configurations, I/O unit sizes, and page allocation schemes. We show that SSD performance can be improved by aligning the file system record size with the SSD's I/O unit as found by our algorithm: Q Pro has an I/O unit of 32 KB, and aligning the file system record size to 32 KB increased performance by 201% and decreased energy consumption by 85% compared with a 4 KB record size.
With the increase in network components connected to the Internet, the need to ensure secure connectivity is becoming increasingly vital. Intrusion Detection Systems (IDSs) are among the common security components that identify security violations. This paper proposes a novel multilevel hybrid classifier that uses a different feature set for each classifier. It presents a Discernibility-Function-based feature selection method and two classifiers, a multilayer perceptron (MLP) and a decision tree (C4.5). Experiments on the KDD'99 Cup and ISCX datasets show that the proposed method performs better than the individual classifiers and other proposed hybrid classifiers, with significant improvements in the detection rates of attack classes and in Cost Per Example (CPE), the primary evaluation metric in the KDD'99 Cup competition.
The reputation-based majority-voting approach is a promising solution for detecting malicious workers in a cloud system. However, it can detect malicious workers only when colluders make up no more than half of all workers. In this paper, we simulate the behavior of a reputation-based method and mathematically analyze its accuracy. The analysis shows that, regardless of the number of colluders and their collusion probability, a group whose reputation value is significantly different from those of the other groups is a completely honest group. Based on this result, we propose a new method for distinguishing honest workers from colluders even when the colluders form the majority. The proposed method constructs groups based on reputation: a group with a significantly highest or lowest reputation value is considered completely honest; otherwise, honest workers are mixed with colluders in a group, and the method identifies the honest workers in such a mixed group by comparing the voting results one by one. A security analysis and an experiment show that our method identifies honest workers much more accurately than a traditional reputation-based approach, with little additional computational overhead.
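The grouping step can be sketched in a few lines. This is our reading of the idea, not the paper's algorithm: reputation is taken as agreement with the (possibly collusion-biased) majority answer, and workers are split at the largest reputation gap only when that gap is "significant"; the gap threshold is an assumption.

```python
def reputations(votes):
    """votes[t][w] = answer of worker w on task t.  A worker's reputation
    is the fraction of tasks on which it agreed with the majority answer."""
    n_workers = len(votes[0])
    agree = [0] * n_workers
    for round_votes in votes:
        majority = max(set(round_votes), key=round_votes.count)
        for w, v in enumerate(round_votes):
            if v == majority:
                agree[w] += 1
    return [a / len(votes) for a in agree]

def split_by_reputation(reps, gap=0.2):
    """Hypothetical grouping rule: sort workers by reputation and cut at the
    largest gap; split only if that gap is significant (>= gap)."""
    order = sorted(range(len(reps)), key=lambda w: reps[w])
    best_gap, cut = max((reps[order[i + 1]] - reps[order[i]], i)
                        for i in range(len(order) - 1))
    if best_gap < gap:
        return None          # mixed group: fall back to vote-by-vote comparison
    return order[:cut + 1], order[cut + 1:]

# Workers 0-2 always answer 1; colluders 3-4 answer 0 with probability 1/2.
votes = [[1, 1, 1, 0, 0], [1, 1, 1, 1, 1], [1, 1, 1, 0, 0], [1, 1, 1, 1, 1]]
reps = reputations(votes)
low, high = split_by_reputation(reps)
```

The interesting case the paper addresses is when `split_by_reputation` returns `None`: reputations alone cannot separate the groups, and the per-vote comparison takes over.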
The software birthmarking technique has conventionally been studied in fields such as software piracy, code theft, and copyright infringement. The most recent API-based software birthmarking method (Han et al., 2014) extracts the API call sequences from the entire code section of a program and generates a birthmark from them using a cryptographic hash function (MD5). It was reported that application types can be categorized by pre-filtering on DLL/API numbers and names. However, the cryptographic hash function makes similarity measurement impossible and causes false negatives, and it is difficult to functionally categorize applications using only DLL/API numbers and names. In this paper, we propose an API-based software birthmarking method that uses fuzzy hashing. For the native code of a program, our technique extracts the API call sequences of the segmented procedures and then generates birthmarks from them with a fuzzy hash function. Unlike a cryptographic hash function, a fuzzy hash supports similarity measurement between data. Our method achieved a high reduction ratio (about 41% on average) relative to the original birthmark generated from the API call sequences alone. In our experiments with a threshold ε of 0.35, the results show that our method is an effective birthmarking system for measuring software similarity. Moreover, a correlation analysis with the top 50 API call frequencies confirms that it is difficult to functionally categorize applications using only DLL/API numbers and names. Compared with prior work, our method significantly improves resilience and credibility.
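The essential difference from an MD5-based birthmark is that the comparison returns a similarity score rather than an exact match. The sketch below illustrates that idea on per-procedure API call sequences; `difflib.SequenceMatcher` stands in for a real fuzzy hash (such as a context-triggered piecewise hash), and the API names and the interpretation of ε = 0.35 as a distance threshold are our assumptions.

```python
from difflib import SequenceMatcher

def birthmark_similarity(seq_a, seq_b):
    """Order-sensitive similarity of two API call sequences in [0, 1].
    A fuzzy hash digest comparison would play this role in the real method."""
    return SequenceMatcher(None, seq_a, seq_b).ratio()

def is_copy(seq_a, seq_b, eps=0.35):
    """Flag a pair as similar when similarity >= 1 - eps (our reading of
    the threshold, for illustration only)."""
    return birthmark_similarity(seq_a, seq_b) >= 1 - eps

# Hypothetical procedures: a copy with one inserted API call still matches.
orig = ["CreateFileW", "ReadFile", "CloseHandle", "ExitProcess"]
copied = ["CreateFileW", "ReadFile", "WriteFile", "CloseHandle", "ExitProcess"]
sim = birthmark_similarity(orig, copied)
```

An exact-hash birthmark would report `orig` and `copied` as entirely different; the similarity score degrades gracefully under small edits, which is what makes the fuzzy approach resilient.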
There have been many previous studies on using smartphones as remote controllers for PCs. Image-based user interfaces have been suggested to provide fully functioning remote applications, but most of them consume considerable battery power and network bandwidth. Moreover, although most users have specific preferences among the applications on their remote PCs, previous smartphone interface systems did not allow users to define their own interfaces to reflect those preferences. This paper presents a new smartphone user interface system, SmartUI, for remote PC control. SmartUI is designed as a text-oriented web-based interface, so it can be used on any smartphone with a built-in web browser while saving battery power and network bandwidth. Moreover, SmartUI enables a user to create buttons on a smartphone for quick launches and for shortcut keys associated with a specific remote PC application. As a result, SmartUI allows users to create their own smartphone interfaces for remote PC control while saving battery power and network bandwidth. SmartUI has been tested with various smartphones, and the results are also presented in this paper.
An encountered-type haptic interface generates touch sensation only when a user's hand "encounters" virtual objects. This paper presents an effective encountered-type haptic interface that renders surfaces with variable curvature. The key idea is to systematically bend a thin elastic plate so as to create a curved surface with the desired curvature; this surface serves as a contacting end effector that follows the user's finger and becomes an interface the user can touch when needed. The pose of the curved surface is controlled so that it corresponds to the curved surfaces of virtual objects and the user's finger position. The idea is realized by attaching two commercial haptic interfaces to the edges of a thin acrylic plate and squeezing the plate. This setup can generate a cylindrical surface with curvature up to 0.035 mm⁻¹ and provides 3-DOF position control and 1-DOF rotational control of the curved surface. The achievable workspace and curvature range are analyzed, and the feasibility and physical performance are demonstrated through a visuo-haptic grabbing scenario. In addition, a psychophysical experiment shows the perceptual competence of the proposed system.
Most unsupervised video segmentation algorithms struggle to extract objects from dynamic real-world scenes with large displacements, because the foreground hypothesis is often initialized without an explicit mutual constraint on top-down spatio-temporal coherency, even when such coherency is imposed on the segmentation objective. To handle these situations, we propose a multiscale saliency flow (MSF) model that jointly learns foreground and background features from multiscale salient evidence, allowing temporally coherent top-down information in one frame to be propagated throughout the remaining frames. In particular, the top-down evidence is detected by combining saliency signatures within a range of higher scales of the approximation coefficients in the wavelet domain. Saliency flow is then estimated by Gaussian kernel correlation of non-maximum-suppressed multiscale evidence, characterized by HOG descriptors in a high-dimensional feature space. We build the MSF model on the primary-object hypothesis, jointly integrating the temporal consistency constraints of saliency maps estimated at multiple scales into the objective. We demonstrate the effectiveness of multiscale saliency flow for segmenting dynamic real-world scenes with large displacements caused by uniform sampling of video sequences.
A vocoder-based speech synthesis system, named WORLD, was developed in an effort to improve the sound quality of real-time speech applications. Speech analysis, manipulation, and synthesis based on vocoders are used in many kinds of speech research. Although several high-quality speech synthesis systems have been developed, their high computational cost has made real-time processing difficult. The new system offers not only high sound quality but also fast processing. It consists of three analysis algorithms and one synthesis algorithm proposed in our previous research. The effectiveness of the system was evaluated by comparing its output with natural speech, including consonants, and its processing speed was compared with those of conventional systems. The results showed that WORLD was superior to the other systems in both sound quality and processing speed. In particular, it was more than ten times faster than the conventional systems, and its real-time factor (RTF) indicated that it is fast enough for real-time processing.
The development of assistive devices for automated sound recognition is an important field of research and has been receiving increased attention. However, there are still very few methods specifically developed for identifying environmental sounds; the majority of existing approaches try to adapt speech recognition techniques to the task, usually incurring high computational complexity. This paper proposes a sound recognition method dedicated to environmental sounds, designed with its main focus on embedded applications. The pre-processing stage is loosely based on the human hearing system, while a robust set of binary features permits a simple k-NN classifier to be used. This gives the system the capability of in-field learning, by which new sounds can simply be added to the reference set in real time, greatly improving its usability. The system was implemented on an FPGA-based platform developed in-house specifically for this application. The design of the proposed method took into consideration several restrictions imposed by the hardware, such as limited computing power and memory, and supports up to 12 reference sounds of around 5.3 s each. Experiments were performed on a database of 29 sounds, with sensitivity and specificity evaluated over several random subsets of these signals. Without additional noise, sensitivity and specificity were 0.957 and 0.918, respectively; with +6 dB of pink noise, they were 0.822 and 0.942. The in-field learning strategy showed no significant change in sensitivity and a total decrease of 5.4% in specificity when the number of reference sounds was progressively increased from 1 to 9 under noisy conditions. The minimal signal-to-noise ratio required by the prototype to correctly recognize sounds was between -8 dB and 3 dB.
These results show that the proposed method and implementation have great potential for several real life applications.
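The classifier at the heart of such a system is deliberately simple: with binary features, nearest-neighbor search reduces to XOR and popcount, which is cheap enough for an embedded target and lets in-field learning be a plain append to the reference set. The sketch below is a generic illustration with invented feature vectors and labels, not the paper's feature extraction.

```python
from collections import Counter

def hamming(a, b):
    """Hamming distance between two binary feature vectors packed as ints
    (XOR then popcount)."""
    return bin(a ^ b).count("1")

def knn_classify(refs, x, k=3):
    """refs: list of (packed_features, label).  Plain k-NN with majority
    vote; adding a new reference sound is just refs.append(...)."""
    nearest = sorted(refs, key=lambda r: hamming(r[0], x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Hypothetical 8-bit binary signatures for two environmental sounds.
refs = [(0b11110000, "doorbell"), (0b11100001, "doorbell"),
        (0b00001111, "alarm"),    (0b00011110, "alarm")]
label = knn_classify(refs, 0b11110001)
```

On an FPGA the same computation maps to a XOR gate array and a popcount tree per reference, which is why the memory limit (12 references) rather than compute is the binding constraint.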
Visual tracking has been studied for several decades but continues to draw significant attention because of its critical role in many applications. Recent years have seen growing interest in the use of correlation filters in visual tracking systems, owing to their extremely compelling results in various competitions and benchmarks. However, the overall tracking capability still needs improvement against issues such as large scale variation, occlusion, and deformation. This paper presents an appealing tracker with robust scale estimation, which handles the problem of the fixed template size in the Kernelized Correlation Filter (KCF) tracker with no significant decrease in speed. We apply a discriminative correlation filter for scale estimation as an independent step after finding the optimal translation with the KCF tracker. Compared with an exhaustive scale-space search scheme, our approach provides improved performance while remaining computationally efficient. To reveal the effectiveness of our approach, we use benchmark sequences annotated with 11 attributes to evaluate how well the tracker handles each attribute. Numerous experiments demonstrate that the proposed algorithm performs favorably against several state-of-the-art algorithms, and appealing results in both accuracy and robustness are achieved on all 51 benchmark sequences, demonstrating the efficiency of our tracker.
In this paper, a multiple-object tracking approach for large-scale scenes is proposed based on a visual sensor network. First, object detection is carried out by extracting HOG features. Then, object tracking is performed with an improved particle filter method: a temporal and spatial dynamic model improves the tracking precision, while an appearance model eliminates the cumulative error generated when evaluating particles. Tracking can be lost for several reasons, such as occlusion, scene switching, and objects leaving the scene; when an object re-enters the area monitored by the visual sensor network, tracking continues through object re-identification. In this way, continuous multiple-object tracking in large-scale scenes is achieved. A database was established by collecting data through the visual sensor network, and the performance of object tracking and object re-identification was tested, verifying the effectiveness of the proposed approach.
A novel rendering algorithm based on best-matching patches is proposed to address the noise artifacts of Monte Carlo renderings. First, in the sampling stage, a representative patch is selected through a modified patch-shift procedure, which gathers homogeneous pixels together to stay clear of edges. Second, each pixel is filtered over a discrete set of filters, where the range kernel is computed using the selected patches; the difference between the selected patch and the filtered value is used as the pixel error, and the single filter that returns the smallest estimated error is chosen. In the reconstruction stage, pixel colors are combined with depth, normal, and texture features to form a cross bilateral filter, which preserves scene details while effectively removing noise. Finally, a heuristic metric is calculated to allocate additional samples to difficult regions. Compared with state-of-the-art methods, the proposed algorithm performs better in both visual image quality and numerical error.
In this paper, we propose a novel classification method, the multiple k-nearest neighbor (MkNN) classifier, and show its practical application to medical image processing. The proposed method performs fine-grained classification when each observation is given as a pair consisting of its spatial coordinates in the observation space and the corresponding feature vector in the feature space. The MkNN classifier exploits the continuity of the distribution of features of the same class not only in the feature space but also in the observation space. To validate the method, we apply it to tissue characterization of coronary plaque. The quantitative and qualitative validity of the MkNN classifier has been confirmed by actual experiments.
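One way to use both spaces is to blend the two distances before the neighbor vote. The sketch below is our reading of the dual-space idea, not the authors' exact rule: `alpha` weights the observation-space (spatial) distance against the feature-space distance, and the sample data are invented.

```python
import math

def mknn_classify(samples, query, alpha=0.5, k=3):
    """k-NN over a blended distance.  samples: list of
    ((x, y), feature_vector, label); query: ((x, y), feature_vector)."""
    def dist(sample):
        pos, feat, _ = sample
        # blend spatial and feature distances; alpha=0 is ordinary k-NN
        return (alpha * math.dist(pos, query[0])
                + (1 - alpha) * math.dist(feat, query[1]))
    labels = [lbl for _, _, lbl in sorted(samples, key=dist)[:k]]
    return max(set(labels), key=labels.count)

# Hypothetical plaque pixels: nearby pixels with similar features share a class.
samples = [((0, 0), (1.0, 0.2), "fibrous"), ((0, 1), (1.1, 0.1), "fibrous"),
           ((5, 5), (0.2, 1.0), "lipid"),   ((5, 6), (0.1, 1.1), "lipid")]
```

The spatial term encodes the continuity assumption: a pixel whose features are ambiguous is pulled toward the class of its spatial neighbors.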
In this letter, we propose a novel garbage collection technique for index structures on flash memory systems, called Proxy-Block-based Garbage Collection (PBGC). Many index structures have been proposed for flash memory systems; they exploit buffers and logs to resolve the update propagation problem, one of the main causes of performance degradation in index structures. However, these studies overlooked the fact that not only record operations but also garbage collection induces update propagation. PBGC exploits a proxy block and a block mapping table to solve the update propagation caused by the page and block changes that garbage collection produces. Experiments show that PBGC decreased the execution time of garbage collection by up to 39% compared with previous garbage collection techniques.
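The benefit of the mapping table is that it adds one level of indirection between the index and physical blocks. The sketch below is our illustration of that indirection, not the paper's algorithm: index entries are assumed to store logical block numbers, so garbage collection that relocates live pages into a proxy block only rewrites the table entry and never touches the index pages.

```python
class BlockMapper:
    """Minimal logical-to-physical block indirection (illustrative only)."""

    def __init__(self):
        self.table = {}                  # logical block -> physical block

    def allocate(self, logical, physical):
        self.table[logical] = physical

    def resolve(self, logical):
        return self.table[logical]

    def collect(self, logical, proxy_physical):
        """Garbage-collect the block behind `logical`: its live pages are
        assumed to have been copied into the proxy block, so only the
        mapping changes; the index entry (`logical`) is untouched."""
        victim = self.table[logical]
        self.table[logical] = proxy_physical
        return victim                    # the victim block can now be erased

mapper = BlockMapper()
mapper.allocate(7, 120)                  # index refers to logical block 7
erased = mapper.collect(7, 245)          # GC relocates it to physical block 245
```

Without the indirection, relocating the block would invalidate every index page pointing at physical block 120, triggering exactly the cascade of rewrites that PBGC is designed to avoid.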
Designing secure revocable storage systems for a large number of users in a cloud-based environment is important. Cloud storage systems should allow their users to dynamically join and leave the storage service, and the users' rights to access the data should change accordingly. Recently, Liang et al. proposed a cloud-based revocable identity-based proxy re-encryption (CR-IB-PRE) scheme that supports user revocation and delegation of decryption rights; to reduce the size of the key update token, they employed a public-key broadcast encryption system as a building block. In this paper, we show that the CR-IB-PRE scheme with the reduced key update token size is not secure against collusion attacks.
In this paper, a new approach to grammatical evolution is presented. The aim is to generate complete programs by probabilistic modeling and sampling of the probability distribution over a given grammar. Specifically, probabilistic context-free grammars are employed, and a modified mapping process is developed to create new individuals from the grammar's distribution. To account for problem structure during individual generation, conditional dependencies between production rules are incorporated into the mapping process. Experiments confirm that the proposed algorithm is more effective than existing methods.
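The basic sampling step can be made concrete. The sketch below draws a derivation from a small probabilistic context-free grammar; it omits the paper's conditional dependencies between production rules, and the depth guard (forcing the last, non-recursive rule beyond `max_depth`) is our assumption.

```python
import random

def sample_pcfg(grammar, symbol="expr", rng=None, depth=0, max_depth=8):
    """Sample a terminal string from a PCFG.  grammar[nt] is a list of
    (probability, rhs) pairs; symbols absent from `grammar` are terminals."""
    rng = rng or random.Random(0)
    if symbol not in grammar:
        return [symbol]                      # terminal symbol
    rules = grammar[symbol]
    if depth >= max_depth:
        rhs = rules[-1][1]                   # assumed non-recursive fallback
    else:
        r, acc = rng.random(), 0.0           # roulette-wheel rule selection
        for p, rhs in rules:
            acc += p
            if r <= acc:
                break
    return [t for s in rhs for t in sample_pcfg(grammar, s, rng, depth + 1, max_depth)]

grammar = {
    "expr": [(0.4, ["expr", "op", "expr"]), (0.6, ["var"])],
    "op":   [(0.5, ["+"]), (0.5, ["*"])],
    "var":  [(0.5, ["x"]), (0.5, ["y"])],
}
program = " ".join(sample_pcfg(grammar))
```

In the paper's setting, the rule probabilities would additionally be conditioned on the parent production, so that structure learned from good individuals biases future sampling.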
Transfer learning extracts useful information from a related source domain and leverages it to promote learning in the target domain, and the effectiveness of the transfer depends on the relationship among the domains. In this paper, a novel multi-source transfer learning method based on multi-similarity is proposed. The method increases the chance of finding sources closely related to the target, reducing "negative transfer," and also imports more knowledge from multiple sources for the target learning. It explores the relationship between the sources and the target with a multi-similarity metric, and then transfers the knowledge of the sources to the target under the smoothness assumption, which enforces that the target classifier shares similar decision values with the relevant source classifiers on the unlabeled target samples. Experimental results demonstrate that the proposed method effectively enhances learning performance.
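The smoothness assumption can be illustrated by the simplest possible combination rule. The sketch below is a hedged stand-in for the paper's formulation: the target's decision value on an unlabeled sample is pushed toward a similarity-weighted average of the source classifiers' decision values, and the similarity cutoff used to suppress unrelated sources is our assumption.

```python
def combine_sources(decision_values, similarities, cutoff=0.3):
    """Similarity-weighted average of source decision values on one
    unlabeled target sample.  Sources with similarity below `cutoff`
    are dropped to limit negative transfer."""
    kept = [(d, s) for d, s in zip(decision_values, similarities) if s >= cutoff]
    if not kept:
        return 0.0                       # no trustworthy source: abstain
    return sum(d * s for d, s in kept) / sum(s for _, s in kept)
```

For example, a source nearly unrelated to the target (similarity 0.1) is excluded entirely, so its contradictory decision value cannot drag the target classifier away from the well-matched source.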
Because wavelet transforms decompose signals in a way similar to the human auditory system, speech enhancement algorithms based on wavelet shrinkage are widely used. In this paper, we propose a new wavelet-shrinkage-based speech enhancement algorithm for hearing aids. The algorithm uses multi-band threshold values and a new wavelet shrinkage function for recursive noise reduction. Experiments with various types of authorized speech and noise signals show that the proposed algorithm performs significantly better than other recently proposed wavelet-shrinkage-based speech enhancement algorithms.
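The wavelet-shrinkage principle these methods share is easy to demonstrate on one decomposition level. The sketch below uses a single-level Haar transform and the classic soft-threshold function; the paper's contribution lies in per-band thresholds and a new shrinkage function, neither of which is reproduced here.

```python
def haar_step(x):
    """One level of the orthonormal Haar wavelet transform (len(x) even)."""
    s = 0.5 ** 0.5
    approx = [s * (a + b) for a, b in zip(x[0::2], x[1::2])]
    detail = [s * (a - b) for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def inverse_haar_step(approx, detail):
    s = 0.5 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out += [s * (a + d), s * (a - d)]
    return out

def soft_threshold(coeffs, t):
    """Classic soft shrinkage: shrink each coefficient toward zero by t."""
    return [max(abs(c) - t, 0.0) * (1 if c >= 0 else -1) for c in coeffs]

def denoise(signal, t=0.5):
    """Single-level shrinkage denoising: transform, shrink details, invert."""
    approx, detail = haar_step(signal)
    return inverse_haar_step(approx, soft_threshold(detail, t))
```

Small detail coefficients (mostly noise) are zeroed while large ones (speech structure) survive; using a different threshold per frequency band, as in the proposed algorithm, lets the shrinkage follow the noise spectrum instead of treating all bands alike.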
In interactive audio services, users can render audio objects rather freely to match their desires, and the spatial audio object coding (SAOC) scheme performs fairly well in terms of both bitrate and audio quality. However, perceptible audio quality degradation can occur when an object is suppressed or played alone. To remedy this, the SAOC scheme with Two-Step Coding (SAOC-TSC) was proposed, but its side-information bitrate is twice that of the original SAOC because of the residual coding used to enhance audio quality. In this paper, an efficient residual coding method for SAOC-TSC is proposed that reduces the side-information bitrate without degrading audio quality or increasing complexity.
The high efficiency video coding (HEVC) standard was developed to achieve greatly improved compression performance compared with the previous standard, H.264/AVC. It adopts a quadtree-based picture partition structure to flexibly signal the various texture characteristics of images. However, this results in a dramatic increase in computational complexity, which hinders the use of HEVC in real-time applications. To alleviate this problem, we propose a fast coding unit (CU) size decision algorithm for HEVC intra coding that considers the depth levels of neighboring CUs, the distribution of rate-distortion (RD) cost, and the distribution of residual data. Experimental results demonstrate that the proposed algorithm achieves up to 60% time reduction with negligible RD performance loss.
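The neighbor-depth ingredient of such a decision can be sketched in isolation. This is a hedged illustration of the general pruning idea, not the paper's full algorithm: only depths within the range spanned by the already-coded left and above CUs, widened by one level in each direction (the margin is our assumption), are submitted to full RD evaluation.

```python
def candidate_depths(left_depth, above_depth, max_depth=3):
    """Neighbour-based CU depth pruning: return the depth levels worth
    evaluating, given the depths chosen by the left and above CUs."""
    lo = max(0, min(left_depth, above_depth) - 1)
    hi = min(max_depth, max(left_depth, above_depth) + 1)
    return list(range(lo, hi + 1))
```

When both neighbors settled on depth 2, for example, depth 0 is skipped entirely, and skipping its RD evaluation is where the encoding-time saving comes from; the RD-cost and residual-distribution criteria in the paper prune further within this range.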
Mass-market head-mounted displays (HMDs) are currently attracting wide interest from consumers because they enable immersive virtual reality (VR) experiences at an affordable cost. Flying over a virtual environment is a common HMD application, but conventional keyboard- or mouse-based interfaces decrease the level of immersion. Motivated by this, we design three types of immersive gesture interfaces (bird, superman, and hand) for flyover navigation. A Kinect depth camera recognizes each gesture by extracting and analyzing the user's body skeleton. We evaluate the usability of each interface through a user study, analyze the advantages and disadvantages of each, and demonstrate that our gesture interfaces are preferable for achieving a high level of immersion and fun in an HMD-based VR environment.