Large-scale network and multimedia application LSIs include application-specific arithmetic units. A multiply-accumulate (MAC) unit, one such optimized unit, arranges partial products and decreases carry propagations. However, no comparable method exists for executing a “subtract-multiplication”. In this paper, we propose a high-speed subtract-multiplication unit that decreases the latency of the subtract operation through a bit-level transformation using selector logics. With this bit-level transformation, the partial products are calculated directly. The proposed subtract-multiplication units can be applied to any type of system that uses subtract-multiplications; the butterfly operation in the FFT is one suitable application. We apply them to Radix-2 and Radix-4 butterfly units. Experimental results show that our proposed operation units using selector logics improve performance by up to 13.92% compared to a conventional approach.
This paper presents an exact method that finds the minimum factored form of an incompletely specified Boolean function. The problem is formulated as a Quantified Boolean Formula (QBF) and solved by a general-purpose QBF solver. We also propose a novel graph structure, called an X-B (eXchanger Binary) tree, which compactly and implicitly enumerates binary trees. Leveraging this graph structure, the factoring problem is transformed into a QBF. Using three sets of benchmark functions (artificially created, randomly generated, and ISCAS 85 benchmark functions), we empirically demonstrate the quality of the solutions and the runtime complexity of the proposed method.
As process technology is scaled down, large-capacity SRAMs will be used, and their power must be lowered. The Vth variation of the deep-submicron process affects SRAM operation and power. This paper compares the macro area, readout power, and operating frequency of three dual-port SRAMs, with multimedia applications in mind: an 8T SRAM, a 10T single-end SRAM, and a 10T differential SRAM. The 8T SRAM has the lowest transistor count and is the most area-efficient. However, its readout power becomes large and its access time increases because of the peripheral circuits. The 10T single-end SRAM, in which a dedicated inverter and transmission gate are appended as a single-end read port, reduces the readout power by 74% and improves the operating frequency by 195% over the 8T SRAM. However, the 10T differential SRAM operates fastest (256% faster than the 8T SRAM) because its small differential voltage of 50 mV enables high-speed operation. In terms of power efficiency, however, its readout current is affected by the Vth variation, and the sense timing cannot be optimized uniformly across all memory cells in a 45-nm technology. Its readout power remains 34% lower than that of the 8T SRAM (33% higher than the 10T single-end SRAM), even though its operating voltage is the lowest of the three. The 10T single-end SRAM always consumes less readout power than the 8T or 10T differential SRAM.
We propose a microphone array network that realizes ubiquitous sound acquisition. Several nodes, each with 16 microphones, are connected to form a large-scale sound acquisition system, which carries out voice activity detection (VAD), sound source localization, and sound enhancement. These three operations are distributed among the nodes; using this distributed design, we realize a low-traffic, data-intensive array network. To manage node power consumption, VAD is implemented so that the system uses little power when speech is not active. For sound source localization, a network-connected multiple signal classification (MUSIC) algorithm is used. The experimental result of sound-source enhancement shows a signal-to-noise ratio (SNR) improvement of 7.75 dB using 112 microphones. Network traffic is reduced by 99.11% when using 1,024 microphones.
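As background for the localization step, the following is a minimal sketch of standard narrowband MUSIC on a uniform linear array, not the paper's network-connected variant; the array geometry, source count, and noise level are illustrative assumptions.

```python
import numpy as np

def steering_vector(theta, n_mics=8, spacing=0.5):
    """Array response for a plane wave from angle theta (radians),
    uniform linear array with half-wavelength spacing."""
    k = 2 * np.pi * spacing * np.arange(n_mics)
    return np.exp(1j * k * np.sin(theta))

def music_spectrum(snapshots, n_sources, angles):
    """MUSIC pseudospectrum: peaks indicate source directions."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    _, vecs = np.linalg.eigh(R)           # eigenvalues in ascending order
    En = vecs[:, :-n_sources]             # noise-subspace eigenvectors
    p = []
    for th in angles:
        a = steering_vector(th)
        p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(p)

# Synthetic test: one source at 20 degrees plus sensor noise.
rng = np.random.default_rng(0)
true_theta = np.deg2rad(20)
s = rng.standard_normal(200) + 1j * rng.standard_normal(200)
x = np.outer(steering_vector(true_theta), s)
x += 0.1 * (rng.standard_normal(x.shape) + 1j * rng.standard_normal(x.shape))

angles = np.deg2rad(np.linspace(-90, 90, 361))
est = angles[np.argmax(music_spectrum(x, 1, angles))]
print(np.rad2deg(est))  # close to 20
```

The peak of the pseudospectrum appears where the steering vector is nearly orthogonal to the noise subspace, which is what makes MUSIC a subspace method rather than a simple beamformer.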
In this paper, we show the effectiveness of deterministic Multi-step Crossover Fusion (dMSXF) and deterministic Multi-step Mutation Fusion (dMSMF), which are genetic multistep searches based on a neighborhood search mechanism, in solving the unsupervised design problem of suitable structuring elements (SEs) of a morphological filter. In our previous work, dMSXF and dMSMF were shown to be very effective for combinatorial optimization problems, particularly those whose landscape is the AR(1) landscape observed in the NK model. In addition, the effectiveness of their reproduction mechanisms for obtaining offspring was shown to be retained with increasing levels of epistasis. In this paper, we show that a characteristic of the AR(1) landscape is observed in an objective function for the unsupervised design of SEs, and that both dMSXF and dMSMF achieve superior search performance over conventional crossover. The processing results of the obtained SEs are also compared with those of conventional filters used for impulse noise removal.
The importance of non-coding RNAs and their informatics tools has grown over the past decade due to a drastic increase in the number of known non-coding RNAs. RNA sequence alignment is one of the most important technologies for such tools. Recently, we proposed a multi-objective genetic algorithm, Cofolga2mo, for obtaining an approximate set of weak Pareto optimal solutions for global pairwise RNA sequence alignment, where sequence similarity and secondary structure contribution are taken into account as objective functions. In the present study, we have developed a web server for obtaining RNA sequence alignments with Cofolga2mo and for assisting decision making based on the alignments. Furthermore, we introduce an index for reducing the number of alignments output by Cofolga2mo. As a result, we successfully reduced the maximum number of alignments for an input RNA sequence pair from fifty to ten without a significant loss of accurate alignments. Using the BRAliBase 2.1 benchmark dataset, we show that the set of at most ten alignments output by Cofolga2mo for an input RNA sequence pair includes an alignment that compares favorably in accuracy with those of previous mono-objective RNA sequence alignment programs.
Path relinking is a population-based heuristic that explores the trajectories in decision space between two elite solutions. It has been successfully used as a key component of several multi-objective optimizers, especially for solving bi-objective problems. Its unique characteristic of performing the search in both the objective and decision spaces makes it interesting to study its behavior in many-objective optimization. In this paper, we focus on the behavior of pure path relinking, propose several variants of path relinking that differ in their strategies for selecting solutions, and analyze their performance on several many-objective NK-landscape instances. In general, the results show that path relinking becomes more effective in improving the convergence of the algorithm as the number of objectives increases. It is also shown that the selection strategy associated with path relinking plays an important role in emphasizing either the convergence or the spread of the algorithm. This study provides useful insights for practitioners on how to exploit path relinking to enhance multi-objective evolutionary algorithms for complex combinatorial optimization problems.
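To make the search concrete, here is a minimal sketch of path relinking between two binary solutions under a toy single-objective fitness; the greedy move selection and one-max objective are assumptions for illustration, not the paper's many-objective setup.

```python
import random

def path_relinking(start, guide, fitness):
    """Walk from `start` to `guide`, at each step flipping the differing
    bit that yields the best intermediate solution; return the best
    solution seen along the trajectory."""
    current = list(start)
    best, best_f = list(current), fitness(current)
    diff = [i for i in range(len(start)) if start[i] != guide[i]]
    while diff:
        # Evaluate moving each remaining differing position toward the guide.
        scored = []
        for i in diff:
            cand = list(current)
            cand[i] = guide[i]
            scored.append((fitness(cand), i))
        f, i = max(scored)          # greedy: best intermediate solution
        current[i] = guide[i]
        diff.remove(i)
        if f > best_f:
            best, best_f = list(current), f
    return best, best_f

one_max = sum  # toy objective: number of ones in the bit string
random.seed(1)
a = [random.randint(0, 1) for _ in range(12)]
b = [random.randint(0, 1) for _ in range(12)]
best, f = path_relinking(a, b, one_max)
print(f >= max(one_max(a), one_max(b)))  # True: both endpoints lie on the path
```

Because every intermediate solution shares attributes with both elite endpoints, the trajectory often uncovers solutions better than either endpoint, which is the property the abstract's many-objective variants exploit.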
Node-perturbation learning (NP-learning) is a kind of statistical gradient descent algorithm that estimates the gradient of an objective function by applying a small perturbation to the outputs of the network. It can be applied to problems where the objective function is not explicitly formulated, including reinforcement learning. In this paper, we show that node-perturbation learning can be formulated as on-line learning in a linear perceptron with noise, so that the differential equations of the order parameters and the generalization error can be derived in the same way as in the statistical mechanical analysis of learning in a linear perceptron. From the analytical results, we show that cross-talk noise, which originates in the errors of the other outputs, increases the generalization error as the number of outputs increases.
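The gradient estimate described above can be sketched for a linear perceptron as follows; the teacher-student setup, step size, and perturbation scale are illustrative assumptions rather than the paper's analytical setting.

```python
import numpy as np

# Node-perturbation learning on a linear perceptron: the gradient is
# estimated from the loss change caused by a small random perturbation
# of the outputs, with no explicit gradient computation.
rng = np.random.default_rng(0)
n_in, n_out = 20, 3
B = rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)  # teacher weights
W = np.zeros((n_out, n_in))                             # student weights
eta, sigma = 0.01, 1e-3

for _ in range(5000):
    x = rng.standard_normal(n_in)
    y = B @ x                                 # teacher output
    o = W @ x                                 # student output
    xi = rng.standard_normal(n_out)           # small output perturbation
    base = 0.5 * np.sum((o - y) ** 2)
    perturbed = 0.5 * np.sum((o + sigma * xi - y) ** 2)
    # (perturbed - base) / sigma, times xi, is an unbiased (up to O(sigma))
    # estimate of the output-space gradient; its outer product with x
    # gives the weight update.
    W -= eta * ((perturbed - base) / sigma) * np.outer(xi, x)

print(np.mean((W - B) ** 2))  # small: the student approaches the teacher
```

With multiple outputs, the factor `(perturbed - base)` mixes the errors of all outputs into every output's update; this is the cross-talk noise the abstract analyzes.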
This paper presents an adaptive group-based job scheduling method for credibility-based sabotage-tolerance techniques in volunteer computing (VC) systems. The credibility-based technique is a promising approach to reliable VC systems because it mathematically guarantees computational correctness based on the credibility of participants. Check-by-voting reduces the cost of credibility checking in the credibility-based technique. However, for applications whose computation deadline is relatively short, current job scheduling methods do not work well with check-by-voting and significantly degrade performance. To improve the performance of VC systems, the proposed job scheduling method adaptively groups participants based on their expected credibility, taking into account the participants currently executing jobs. Simulations of VC systems show that the proposed method always outperforms current job scheduling methods regardless of the values of unknown parameters such as the population and behavior of saboteurs.
In service composition environments, users and service entity hosts are typically geographically distributed. Therefore, service response performance may be poor when users invoke services that are physically far from them. Such issues are difficult to solve with the traditional caching technologies of content delivery networks because service providers do not always allow their service entities to be copied to all service entity hosts. In this paper, we address the service invocation control problem in light of these issues. First, we formally model the service invocation problem in service composition environments. Then we design several dynamic service invocation control mechanisms to improve the response performance of atomic and composite services. The evaluation results show that (1) the mechanism for atomic services that considers both potential users of most service invocation requests and potential users of continuous requests improves response performance the most; (2) the mechanism for composite services that considers the group characteristics of atomic services improves response performance more than the other mechanisms; and (3) our proposed dynamic mechanisms provide stable response performance from the users' perspective.
Suenaga et al. have developed a type-based framework for automatically translating tree-processing programs into stream-processing ones. The key ingredient of the framework was the use of ordered linear types to guarantee that a tree-processing program traverses an input tree exactly once in depth-first, left-to-right order (so that the input tree can be read from a stream). Their translation, however, sometimes introduces redundant buffering of input data. This paper extends their framework by introducing ordered non-linear types in addition to ordered linear types. The resulting transformation framework reduces the redundant buffering, generating more efficient stream-processing programs.
An SQL injection attack is one of the most serious security threats to web applications. It allows an attacker to access the underlying database and execute arbitrary commands, which may lead to the disclosure of sensitive information. The primary way to prevent SQL injection attacks is to sanitize user-supplied input; however, this is usually performed manually by developers and is therefore a laborious and error-prone task. Although security tools assist developers in verifying the security of their web applications, they often generate a number of false positives and negatives. In this paper, we present our technique, called Sania, which performs efficient and precise penetration testing by dynamically generating effective attacks through investigating SQL queries. Since Sania is designed to be used in the development phase of web applications, it can intercept SQL queries. By analyzing these queries, Sania automatically generates precise attacks and assesses security according to the context of the potentially vulnerable slots in the SQL queries. We evaluated our technique using real-world web applications and found it to be efficient: Sania generated more accurate attacks and fewer false positives than popular web application vulnerability scanners. We also found previously unknown vulnerabilities in a commercial product that was about to be released and in open-source web applications.
Network coordinates (NCs) enable efficient and accurate estimation of network latency by mapping the geographical relationships among all nodes to a Euclidean space. Many researchers have proposed NC-based strategies to reduce the lookup latency of distributed hash tables (DHTs). However, these strategies are limited in how much they can improve lookup latency: the nearest node to which a query should be forwarded is not always within a node's consideration scope. This is because conventional latency improvement strategies assign node IDs independently of the underlying physical network and thus still allow detour routing. In this paper, we propose an NC-based method of constructing a topology-aware DHT by Proximity Identifier Selection (PIS/NC). PIS/NC constructs the logical ID space of a DHT from the Euclidean space constructed by NCs; a node ID corresponds to the network coordinate of the node. As a result, the consideration scope of a node always contains the nearest node, so we can expect a large reduction in lookup latency. Unlike the conventional PIS strategy, which suffers unavoidable issues due to uneven ID distribution, PIS/NC moderates these issues with a simple optimization provided by a PIS/NC stabilizer. The PIS/NC stabilizer detects an uneven distribution of node IDs locally and then recalculates some IDs so that the unevenness is moderated. As case studies, this paper presents Canary and Harpsichord, which are PIS/NC-based CAN and Chord, respectively. Simulation results show that PIS/NC-based DHTs improve lookup latency. In an environment using the Transit-Stub model, where SAT-Match and DHash++ reduce the median lookup latency of CAN by only 19% and of Chord by only 9%, respectively, Canary and Harpsichord reduce it by 40% and 35%, respectively. We also verify that the PIS/NC stabilizer moderates the non-uniform distribution of node IDs.
In this paper, we propose DTS (Distributed TCP Splicing), a new mechanism for performing content-aware TCP connection switching in a broadcast-based single-IP-address cluster. The broadcast-based design enables each cluster node to continue providing services to clients even when other nodes in the cluster fail. Each connection request from a client is first distributed among the cluster nodes using consistent hashing, in order to share the request inspection workload. The connection is then transferred to an appropriate node according to the content of the request. DTS is implemented as a Linux kernel module and does not require any modification to the main kernel code, server applications, or client applications. With an 8-node server configuration, a DTS cluster with multiple request inspectors achieves about 3.4 times higher connection throughput than a single-inspector configuration. A SPECweb 2005 Support benchmark is also conducted on a four-node cluster, where DTS reduces the total amount of disk accesses through locality-aware request distribution and almost halves the number of file downloads that fail to meet the speed requirement.
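The request-distribution step can be illustrated with a generic consistent-hashing ring; the node names, virtual-point count, and hash function below are assumptions, not details of the DTS implementation.

```python
import bisect
import hashlib

class ConsistentHash:
    """Minimal consistent-hashing ring with virtual points per node."""

    def __init__(self, nodes, replicas=64):
        self.replicas = replicas
        self.ring = []            # sorted list of (hash, node) points
        for node in nodes:
            self.add(node)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node):
        for r in range(self.replicas):
            bisect.insort(self.ring, (self._hash(f"{node}#{r}"), node))

    def remove(self, node):
        self.ring = [(h, n) for h, n in self.ring if n != node]

    def lookup(self, key):
        # First ring point clockwise from the key's hash owns the key.
        h = self._hash(key)
        i = bisect.bisect(self.ring, (h, ""))
        return self.ring[i % len(self.ring)][1]

ring = ConsistentHash([f"node{i}" for i in range(8)])
before = {k: ring.lookup(k) for k in (f"client{i}" for i in range(1000))}
ring.remove("node3")             # simulate a node failure
after = {k: ring.lookup(k) for k in before}
moved = sum(before[k] != after[k] for k in before)
print(moved / len(before))       # only keys owned by node3 are reassigned
```

The design choice that matters for a fault-tolerant cluster is visible in the last lines: removing one node reassigns only that node's share of the keys, so the remaining inspectors keep their workloads.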
Idle resources can be exploited not only to run important local tasks such as data backup and virus checking, but also to contribute to society by participating in distributed computing projects. When executing background processes to utilize such valuable idle resources, we need to control them explicitly to avoid foreground performance degradation; otherwise, users will be discouraged from exploiting idle resources. In this paper, we show that we can detect resource contention between foreground and background processes and properly control background process execution at the user level, without modifications to the underlying operating system or user applications. We infer resource contention from changes in the approximated resource shares of background processes. In deriving those resource shares, our approach takes advantage of dynamically enabled probes. It also accounts for different resource types and can handle multiple background processes with varied resource needs. Our experiments show that our system keeps the increase in foreground execution time due to background processes below 16.9%, and often much lower.
The static dependency pair method is a method for proving the termination of higher-order rewrite systems à la Nipkow. It combines the dependency pair method introduced for first-order rewrite systems with the notion of strong computability introduced for typed λ-calculi. Argument filterings and usable rules are two important methods of the dependency pair framework used by current state-of-the-art first-order automated termination provers. In this paper, we extend the class of higher-order systems on which the static dependency pair method can be applied. Then, we extend argument filterings and usable rules to higher-order rewriting, hence providing the basis for a powerful automated termination prover for higher-order rewrite systems.
This article provides a mathematical formula for determining the optimal sizes of two different-sized spheres to maximize the packing density when randomized loose packing is employed in containers of various shapes. The formula was evaluated with numerous computer simulations involving over a million spheres.
In this paper, we propose a semi-automatic depth estimation algorithm for Free-viewpoint TV (FTV). The proposed method extends an automatic depth estimation method by accepting additional manually created data for one or multiple frames. Automatic depth estimation methods generally have difficulty obtaining good depth results around object edges and in areas with low texture. The goal of our method is to improve the depth in these areas and to reduce view synthesis artifacts in Depth Image Based Rendering. High-quality view synthesis is very important in applications such as FTV and 3DTV. We define three types of manual input data providing disparity initialization, object segmentation information, and motion information. These data are input as images, which we refer to as the manual disparity map, manual edge map, and manual static map, respectively. For evaluation, we used MPEG multi-view videos to demonstrate that our algorithm can significantly improve the depth maps and, as a result, reduce view synthesis artifacts.
Probabilistic classification and multi-task learning are two important branches of machine learning research. Probabilistic classification is useful when the ‘confidence’ of a decision is necessary, while the idea of multi-task learning is beneficial when multiple related learning tasks exist. So far, kernelized logistic regression has been a vital probabilistic classifier for use in multi-task learning scenarios. However, its training tends to be computationally expensive, which has prevented its use in large-scale problems. To overcome this limitation, we propose to employ a recently proposed probabilistic classifier, the least-squares probabilistic classifier, in multi-task learning scenarios. Through image classification experiments, we show that our method achieves classification performance comparable to the existing method with much less training time.
A method for detecting moving objects using a Markov random field (MRF) model is proposed, based on background subtraction. We aim to overcome two major drawbacks of existing methods: dynamic background changes, such as swinging trees and camera shake, tend to yield false positives, and similar colors in objects and their backgrounds tend to yield false negatives. One characteristic of our method is background subtraction using the nearest neighbor method with multiple background images, to cope with dynamic backgrounds. Another is the estimation of object movement, which provides robustness against similar colors in objects and background regions. From the viewpoint of the MRF, we define an energy function reflecting these characteristics and optimize it by graph cut. In most cases, the proposed method runs in (nearly) real time, and experimental results show favorable detection performance even in difficult cases where previous methods have failed.
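The first characteristic, nearest-neighbor background subtraction over multiple background images, can be sketched as follows; the image sizes and distance threshold are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def nn_foreground(frame, backgrounds, thresh=30.0):
    """frame: (H, W, 3); backgrounds: (K, H, W, 3) samples of the
    (possibly dynamic) background.  A pixel is foreground when its
    distance to the *closest* background sample exceeds the threshold,
    so pixels matching any background state are tolerated."""
    d = np.linalg.norm(backgrounds - frame[None].astype(float), axis=-1)
    return d.min(axis=0) > thresh

# Toy example: 4 noisy background samples plus one bright square "object".
rng = np.random.default_rng(0)
bg = rng.normal(100, 2, size=(4, 32, 32, 3))
frame = bg[0].copy()
frame[8:16, 8:16] = 200          # object region
mask = nn_foreground(frame, bg)
print(mask[10, 10], mask[0, 0])  # True False
```

Keeping several background samples is what makes the nearest-neighbor rule robust to dynamic backgrounds: a swinging branch that appears in any stored sample is still classified as background.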
Now that video surveillance systems are widely used, the capability of extracting moving objects and estimating moving-object density from video sequences is indispensable for these systems. This paper proposes new techniques for crowded objects motion analysis (COMA) to deal with crowded scenes, consisting of three parts: background removal, foreground segmentation, and crowded-object density estimation. To obtain optimal foregrounds, we propose an approach combining Lucas-Kanade optical flow and Gaussian background subtraction. For foreground segmentation, we put forward an optical flow clustering approach, which segments different crowded object flows, and a block absorption approach to deal with the small blocks produced during clustering. Finally, we extract a set of 15 features from the foreground flows and estimate the density of each foreground flow. We employ self-organizing maps both to reduce the dimensionality of the feature vector and to serve as the final classifier. Experimental results show that the proposed technique is useful and efficient.
View synthesis using depth maps is a crucial application for Free-viewpoint TV (FTV). Depth estimation based on stereo matching is error-prone, leading to noticeable artifacts in synthesized views. To provide high-quality virtual views for FTV, we introduce a probabilistic framework that constrains the reliability of each synthesized pixel by maximum likelihood (ML). Spatially adaptive reliability is obtained by incorporating a Gamma hyper-prior and approximating the synthesis error using a reference crosscheck. Furthermore, we formulate view synthesis in the framework of Maximum a Posteriori (MAP) estimation. Two versions of the synthesized view are generated: the solution under the ML criterion and the solution under the MAP criterion, obtained by straightforward interpolation and by graph cuts, respectively. We experimentally demonstrate the effectiveness of both solutions on MPEG standard test sequences. The results show that the proposed method outperforms state-of-the-art depth-based view synthesis methods, both in subjective artifact reduction and in objective PSNR improvement.
The technologies of Cloud Computing and NGN are now driving a paradigm shift in which various services are provided to business users over the network. In conjunction with this movement, many studies aim to realize a ubiquitous computing environment in which a huge number of individual users can share their computing resources over the Internet, such as personal computers (PCs), game consoles, and sensors. To realize an effective resource discovery mechanism for such an environment, this paper presents an adaptive overlay network that enables a self-organizing resource management system to efficiently adapt to a heterogeneous environment. The proposed mechanism is composed of two functions. One adjusts the number of a resource's logical links, which forward search queries, so that less-useful query flooding is reduced. The other connects resources so as to decrease communication latency on the physical network rather than the number of query hops on the overlay network. To further improve discovery efficiency, this paper integrates these functions into SORMS, a self-organizing resource management system proposed in our previous work. The simulation results indicate that the proposed mechanism can increase the number of discovered resources by 60% without decreasing discovery efficiency, and can reduce total communication traffic by 80% compared with the original SORMS. This improvement is obtained by efficient control of logical links in a large-scale network.
Web tracking sites, or Web bugs, are a potential but serious threat to users' privacy during Web browsing. Web sites and their associated advertising sites surreptitiously gather visitor profiles and may abuse or improperly expose them, even when visitors are unaware that their profiles are being utilized. To block such sites in a corporate network, most companies employ filters that rely on blacklists, but these lists are insufficient. In this paper, we propose Web tracking site detection and blacklist generation based on temporal link analysis. Our proposal analyzes traffic at the network gateway so that it can monitor all tracking sites in the administrative network. The proposed algorithm constructs a graph of sites and their visit times in order to characterize each site, and the system then classifies suspicious sites using machine learning. We confirm that individual public blacklists contain only 22-70% of the known tracking sites. The machine learning approach identifies blacklisted sites with a true positive rate of 62-73%, which is more accurate than any single blacklist. Although the learning algorithm falsely identified 15% of unlisted sites, 96% of these were verified by manual labeling to be unknown tracking sites. These unknown tracking sites are good candidates for entries in a new blacklist.
Many users are attracted by online social media such as Delicious and Digg, and they put tags on online resources. The relations among users, tags, and resources can be represented as a tripartite network composed of three types of vertices. Detecting communities (densely connected subnetworks) in such tripartite networks is important for finding similar users, tags, and resources. For unipartite networks, several attempts have been made to detect communities, and one popular approach is to optimize modularity, a measure for evaluating the goodness of network divisions. Modularity for bipartite networks has been proposed by Barber, Guimera, Murata, and Suzuki. However, as far as the author knows, there have been few attempts to define modularity for tripartite networks. This paper defines a new tripartite modularity that indicates the correspondence among communities of the three vertex types. By optimizing the value of our tripartite modularity, better community structures can be detected in synthetic tripartite networks.
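For reference, the unipartite quantity being generalized, Newman's modularity, can be computed as in this sketch; the example graph is an assumption, and the paper's tripartite definition is not reproduced here.

```python
import numpy as np

def modularity(adj, communities):
    """Newman modularity for an undirected unipartite graph:
    Q = (1/2m) * sum_ij (A_ij - k_i k_j / 2m) * [c_i == c_j]."""
    adj = np.asarray(adj, float)
    m2 = adj.sum()                       # equals 2m for an undirected graph
    k = adj.sum(axis=1)                  # vertex degrees
    same = np.equal.outer(communities, communities)
    return ((adj - np.outer(k, k) / m2) * same).sum() / m2

# Two triangles joined by a single edge: a clear two-community division.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
print(round(modularity(A, [0, 0, 0, 1, 1, 1]), 3))  # 0.357
```

Community detection methods of the kind discussed in the abstract search over divisions to maximize this value; the bipartite and tripartite variants replace the `k_i k_j / 2m` null model with one appropriate to networks whose edges only connect different vertex types.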
Cognitive radio (CR) technology has been proposed to improve bandwidth-use efficiency and the quality of service (QoS) of heterogeneous wireless networks comprising various types of radio systems. As CR network systems grow, network security has been raised as an area of concern; the topic has yet to be fully considered, and no suitable authentication methods have been identified. In this paper, we propose a radio-free mutual authentication protocol for CR networks. The protocol, named EAP-CRP, adopts the location information of a mobile terminal as a shared secret for authentication. EAP-CRP is designed to satisfy the requirements of wireless network security (confidentiality, integrity, and availability) and is realized as a lightweight, quick-responding mutual authentication protocol.
In this paper, we propose an extension of our previous mobile sensor control method, DATFM (Data Acquisition and Transmission with Fixed and Mobile node). DATFM uses two types of sensor nodes: fixed nodes and mobile nodes. The data acquired by the nodes are accumulated on a fixed node before being transferred to the sink node. The extended method, named DATFM/DF (DATFM with deliberate Deployment of Fixed nodes), strategically deploys sensor nodes based on an analysis of the performance of DATFM in order to improve the efficiency of sensing and data gathering. We also conduct simulation experiments to evaluate the performance of DATFM/DF.
MANET for NEMO (MANEMO) is a new type of network that integrates multi-hop mobile wireless networks with the global connectivity provided by Network Mobility (NEMO). Two factors limit the scalability of MANEMO: the volatility of topologically correct global addresses, and the excessive traffic load caused by inefficient use of nested tunnels and the consequent redundant routing of packets. We propose NAT-MANEMO, which solves both problems by applying NAT to some mobile router addresses, bypassing tunnel nesting. This approach retains global addresses for mobile end nodes, preserving application transparency, and requires only minimal modification to existing specifications. Our ideas are evaluated using simulation and a proof-of-concept implementation. The simulation shows that the additional signaling overhead for the route optimization introduced by our proposal is negligible compared to the bandwidth of an IEEE 802.11 link. The implementation confirms that route optimization reduces latency and improves throughput.
The present study investigates a Medium Access Control (MAC) protocol for reliable inter-vehicle communications (IVC) to support safe driving, with the goal of reducing road traffic accidents. A number of studies have evaluated the performance of the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) protocol. However, the communication quality provided by CSMA/CA is seriously degraded by the hidden terminal problem in IVC. Therefore, we propose a new MAC protocol, referred to as Periodic Broadcast-Timing Reservation Multiple Access (PB-TRMA), which can autonomously control transmission timing and avoid packet collisions by enhancing the Network Allocation Vector (NAV) for periodic broadcast communications. The simulation results show that the proposed protocol resolves the hidden terminal problem and mitigates data packet collisions, and that the behavior of PB-TRMA resembles that of Time-Division Multiple Access (TDMA). In addition, we show that two procedures, packet collision retrieval and hidden terminal detection, are essential ingredients of PB-TRMA for achieving high performance.
The use of public Malware Sandbox Analysis Systems (public MSASs), which receive online submissions of possibly malicious files or URLs from arbitrary users, analyze their behavior by executing or visiting them in a testing environment (i.e., a sandbox), and send analysis reports back to the user, has increased in popularity. Consequently, anti-analysis techniques have also evolved from known technologies, such as anti-virtualization and anti-debugging, to the detection of specific sandboxes by checking their unique characteristics, such as the product ID of their OS or the use of certain Dynamic Link Libraries (DLLs) in a particular sandbox. In this paper, we point out yet another important characteristic of sandboxes, namely, their IP addresses. In public MSASs, the sandbox is often connected to the Internet in order to properly observe malware behavior, since modern malware communicates with remote hosts on the Internet for various reasons, such as receiving command and control (C&C) messages and files for updates. We explain and demonstrate that the IP address of an Internet-connected sandbox can easily be disclosed by an attacker who submits a decoy sample dedicated to this purpose. The disclosed address can then be shared among attackers, blacklisted, and used against the analysis system, for example, to conceal the potential malicious behavior of malware. We call this method Network-based Sandbox Detection by Decoy Injection (NSDI). We conducted case studies with 15 representative existing public MSASs, selected from 33 online malware analysis systems through a careful screening process, and confirmed that hidden behavior of the malware samples was successfully concealed from all 15 analysis systems by NSDI. In addition, we found a risk that the background analysis activity behind these systems can also be revealed by NSDI if samples are shared among the systems without careful consideration.
Moreover, about three months after our first case study, it was reported that a real-world NSDI attack had been conducted against several public MSASs.
Because a bot communicates with a malicious controller over a normal or an encrypted communication channel and updates its code frequently, it is difficult to detect an infected personal computer (PC) using a signature-based intrusion detection system (IDS) or an antivirus system (AV). Because control and attack packets are sent from the bot process independently of user operation, a behavior monitor is effective in detecting anomalous communication. In this paper, we propose a bot detection technique that checks outbound packets against destination-based whitelists. If any outbound packet sent during a non-operating period does not match the whitelists, the PC is considered to be infected by a bot. The whitelists are a set of legitimate IP addresses (IPs) and/or domain names (DNs). We implement the proposed system as a host-based detector and evaluate its false negative (FN) and false positive (FP) performance.
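The whitelist check described above can be sketched as follows; the whitelist entries and packet records are hypothetical examples for illustration, not the evaluated system.

```python
import ipaddress

# Destination-based whitelists: legitimate networks and domain names.
# All entries below are hypothetical examples (documentation address ranges).
WHITELIST_NETS = [ipaddress.ip_network(n) for n in
                  ("192.0.2.0/24", "198.51.100.0/24")]
WHITELIST_DOMAINS = {"update.example.com", "ntp.example.org"}

def is_whitelisted(dst_ip=None, dst_domain=None):
    """A destination matches if its domain is listed or its IP falls
    inside a whitelisted network."""
    if dst_domain is not None and dst_domain in WHITELIST_DOMAINS:
        return True
    if dst_ip is not None:
        ip = ipaddress.ip_address(dst_ip)
        return any(ip in net for net in WHITELIST_NETS)
    return False

def check_idle_traffic(packets):
    """Return outbound destinations, observed while the user is not
    operating the PC, that violate the whitelists; any hit suggests
    bot-like activity."""
    return [p for p in packets
            if not is_whitelisted(p.get("ip"), p.get("domain"))]

idle_packets = [{"ip": "192.0.2.10"},              # whitelisted network
                {"domain": "update.example.com"},  # whitelisted domain
                {"ip": "203.0.113.66"}]            # unknown destination
print(check_idle_traffic(idle_packets))  # [{'ip': '203.0.113.66'}]
```

Restricting the check to non-operating periods, as the abstract describes, is what keeps the false positive rate manageable: user-driven traffic to arbitrary sites never reaches the detector.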
In this paper, we propose a new distributed hit-list worm detection method: the Anomaly Connection Tree Method with Distributed Sliding Window (ACTM-DSW). ACTM-DSW employs multiple distributed network Intrusion Detection Systems (IDSs), each of which monitors a small portion of an enterprise network. In ACTM-DSW, worm propagation trees are detected using a sliding time window; more precisely, the distributed IDSs cooperatively detect tree structures composed of the worm's infection connections made within a time window. Through computer-based simulations, we demonstrate that ACTM-DSW outperforms an existing distributed worm detection method, called d-ACTM/VT, in detecting worms whose infection intervals are not constant but instead follow an exponential or uniform distribution. In addition, we implement the distributed IDSs on Xen, a virtual machine environment, and experimentally demonstrate the feasibility of the proposed method.