Content-oriented networking is expected to be one of the most promising approaches for resolving the mismatch between content-oriented network services and the location-oriented architecture of the current network infrastructure. Several content-oriented network architectures have been proposed, but research on content-oriented networks has only just begun, and many technical issues remain to be resolved. Because of the content-oriented nature of these networks, content data transmitted in a network can be reused to serve requests from other users. Pervasive caching, which forms interconnected caching networks, is one of the most important benefits brought by the content-oriented network architecture. Caching networks are among the most active research areas, and many studies have been published. This paper surveys recent research on caching networks in content-oriented networks, focusing on the factors that most affect caching network performance: content request routing, caching decisions, and cache replacement policies. It also discusses future directions for caching network research.
Named Data Networking (NDN) is a proposed future Internet architecture that shifts the fundamental abstraction of the network from host-to-host communication to request-response for named, signed data, an information-dissemination-focused approach. This paper describes a general design for receiver-driven, real-time streaming data (RTSD) applications over the current NDN implementation that aims to take advantage of the architecture's unique affordances. It is based on experimental development and testing of running code for real-time video conferencing, a positional tracking system for interactive multimedia, and a distributed control system for live performance. The design includes initial approaches to minimizing latency, managing buffer size and Interest retransmission, and adapting retrieval to maximize bandwidth and control congestion. Initial implementations of these approaches are evaluated for functionality and performance, and the potential for future research in this area, and for improved performance as new features of the architecture become available, is discussed.
These days, in addition to host-to-host communication, Information-Centric Networking (ICN) has emerged to reflect current content-centric network usage, based on the fact that many users are now interested not in where content is located but in the content itself. However, the current IP network must still remain, at least from a deployment perspective, one of the near-future network architectures. This is because ICN faces various scalability and feasibility challenges, and host-to-host communication remains widespread in applications such as remote login and VoIP. We therefore aim to realize ICN features on the conventional IP network to achieve a feasible and efficient architecture. In such an architecture, only user edges keep some content caches within their computational and bandwidth limitations, and content is also replicated on dispersed replica servers to ensure content distribution even when no user cache is found. To achieve this, we propose to operate the Content Delivery Network (CDN) and Breadcrumbs (BC) frameworks in coordination on the IP network. Both CDN and BC are important content-centric techniques. In a CDN, replica servers called surrogates are dispersed across the Internet. Although this serves users' content from nearby surrogate servers, the surrogates bear a high workload in distributing content to many users. In the proposed method, in cooperation with the BC method, which was proposed to implement ICN on IP networks, the surrogate server workload is drastically reduced without largely increasing the hop count for content delivery. Although our approach requires some additional functions, such as adding the BC architecture to routers and calculating and reporting the information required for coordinating the BC method with the CDN, the cost of these functions is not significant. Finally, we evaluate the proposed method through simulation with a carefully modeled CDN.
On social networking services (SNSs), detecting Sybils is an urgent demand. The most famous approach is the “SybilRank” scheme, in which each node evenly distributes its trust value starting from honest seeds, and Sybils are detected based on the trust value. Furthermore, Zhang et al. propose to prevent trust values from being distributed into Sybils by pruning suspicious relationships before performing SybilRank. However, we point out that these two schemes have shortcomings that must be remedied. In the former, the seeds are concentrated in specific communities because they are selected from the nodes with the largest numbers of friends, and thus the trust value is not evenly distributed. In the latter, a sophisticated attacker can avoid graph pruning by creating relationships between Sybil nodes. In this paper, we propose a robust seed selection and graph pruning scheme to detect Sybil nodes more accurately. To distribute trust values to honest nodes more evenly, we first detect communities in the SNS and select honest seeds from each detected community. Then, leveraging the fact that Sybils cannot make dense relationships with honest nodes, we propose a graph pruning scheme based on the density of relationships with trusted nodes: we prune relationships that are sparsely connected to trusted nodes, which enables robust pruning of malicious relationships even if the attackers create a large number of common friends. Through computer simulations with a real dataset, we show that our scheme improves the detection accuracy for both Sybil and honest nodes.
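The even trust distribution that both schemes build on can be illustrated with a minimal sketch of SybilRank-style power iteration on a toy graph (the graph, seed choice, and iteration count here are assumptions for illustration, not the authors' implementation):

```python
# Illustrative SybilRank-style trust propagation: trust starts at honest
# seeds, is split evenly over each node's friends for O(log n) rounds,
# then degree-normalized for ranking.
import math

def propagate_trust(graph, seeds, total_trust=100.0):
    """graph: dict node -> list of friends; seeds: honest seed nodes."""
    trust = {v: 0.0 for v in graph}
    for s in seeds:
        trust[s] = total_trust / len(seeds)
    for _ in range(int(math.ceil(math.log2(len(graph)))) + 1):
        nxt = {v: 0.0 for v in graph}
        for v, friends in graph.items():
            share = trust[v] / len(friends)
            for u in friends:
                nxt[u] += share
        trust = nxt
    # degree-normalize so high-degree nodes are not unduly favored
    return {v: trust[v] / len(graph[v]) for v in graph}

# Toy graph: an honest clique {a,b,c} loosely attached to a Sybil pair {x,y}.
g = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "x"],
     "x": ["c", "y"], "y": ["x"]}
ranks = propagate_trust(g, seeds=["a"])
```

In this toy run the honest clique ranks above the isolated Sybil `y`; the paper's contribution lies in choosing the seeds per community and pruning edges so that this separation also holds against stronger attackers.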
Ever-evolving malware makes it difficult to prevent hosts from being infected. Botnets in particular are one of the most serious threats to cyber security, since they consist of many malware-infected hosts. Many countermeasures against malware infection, such as generating network-based signatures or templates, have been investigated. Such templates introduce regular expressions to detect polymorphic attacks conducted by attackers. A potential problem with such templates, however, is that they sometimes falsely regard benign communications as malicious, resulting in false positives, due to an inherent aspect of regular expressions. Since the cost of responding to malware infection is quite high, the number of false positives should be kept to a minimum. We therefore propose a system that generates templates causing fewer false positives than a conventional system, in order to detect malware-infected hosts more accurately. Our key idea is that malicious infrastructures, such as malware samples or command-and-control servers, tend to be reused rather than created from scratch. We verify this idea and propose a new system that profiles the variability of substrings in HTTP requests, which makes it possible to identify invariant keywords stemming from the same malicious infrastructure and to generate more accurate templates. The results of implementing our system and validating it with real traffic data indicate that it reduces false positives by up to two-thirds compared to the conventional system while even increasing the detection rate of infected hosts.
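The key idea of profiling substring variability can be sketched minimally as follows (hypothetical request strings and a simplistic tokenizer, not the authors' system): token positions that never vary across samples become invariant keywords, and the rest are generalized into wildcards.

```python
# Sketch: derive a template regex from HTTP request lines of the same
# (hypothetical) malware family by keeping invariant tokens literal.
import re

def build_template(requests):
    token_lists = [re.split(r"([/?&=\s])", r) for r in requests]
    parts = []
    for col in zip(*token_lists):  # assumes equal token counts, for simplicity
        if len(set(col)) == 1:
            parts.append(re.escape(col[0]))   # invariant keyword
        else:
            parts.append(r"[^/?&=\s]*")       # variable substring
    return "^" + "".join(parts) + "$"

samples = [
    "GET /gate.php?id=3fa9&os=winxp HTTP/1.1",
    "GET /gate.php?id=77b1&os=win7 HTTP/1.1",
]
template = re.compile(build_template(samples))
```

The template then matches unseen requests to the same endpoint (`GET /gate.php?id=9999&os=linux HTTP/1.1`) but not unrelated benign requests; the paper's system additionally weighs how variable each substring is across malicious infrastructures to decide what to keep literal.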
Flow classification is of great significance for network management. Machine-learning-based flow classification is widely used nowadays, but features that capture the non-Gaussian characteristics of network flows are still absent. In this paper, we propose Windowed Higher-Order Statistical Analysis (WHOSA) for machine-learning-based flow classification. In our methodology, a network flow is modeled as three different time series: the flow rate sequence, the packet length sequence, and the inter-arrival time sequence. For each sequence, both the higher-order moments and the largest singular values of the bispectrum are computed as features. Some lower-order statistics are also computed from the distribution to build a contrasting feature set, and the C4.5 decision tree is chosen as the classifier. The experimental results reveal the capability of WHOSA for flow classification. Moreover, when the classifier is fully trained, the WHOSA feature set exhibits stronger discriminative power than the lower-order statistical feature set.
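As a minimal illustration of the moment-based part of such a feature set (the bispectrum features are omitted, and the sequence is a toy example rather than a real flow):

```python
# Standardized higher-order moments of a per-flow time series
# (e.g. packet lengths): k=3 is skewness-like, k=4 is kurtosis-like.
def higher_order_moments(xs, orders=(3, 4)):
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    std = var ** 0.5
    feats = []
    for k in orders:
        feats.append(sum(((x - mean) / std) ** k for x in xs) / n)
    return feats

# A symmetric sequence has a (near-)zero third standardized moment.
skew, kurt = higher_order_moments([1, 2, 3, 4, 5])
```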
To efficiently monitor link performance in an OpenFlow network with a single measurement box (referred to as a “beacon”), this paper presents a measurement scheme that calculates a set of measurement paths from the beacon covering all links in the network, exploiting the controllability of individual measurement paths in OpenFlow, and comprehensively estimates the performance of all physical links from round-trip active measurements. A novel feature of the scheme is that it minimizes the maximum number of exclusive flow entries required for active measurements on OpenFlow switches: by using common packet header values in the probing packets, multiple entries are aggregated into a single entry, saving resources on the OpenFlow switches and controller. We demonstrate the effectiveness and feasibility of our solution through simulations and emulation scenarios.
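The path-cover idea can be sketched as follows; this toy version builds one round-trip measurement path per still-uncovered link on a three-node topology and ignores the paper's flow-entry minimization (the greedy strategy and node names are illustrative assumptions):

```python
# Greedy round-trip path cover: for each uncovered link (u, v), form the
# path beacon -> u -> v -> beacon and mark every traversed link covered.
from collections import deque

def shortest_path(adj, src, dst):
    prev, seen = {}, {src}
    q = deque([src])
    while q:
        v = q.popleft()
        if v == dst:
            break
        for u in adj[v]:
            if u not in seen:
                seen.add(u); prev[u] = v; q.append(u)
    path, v = [dst], dst
    while v != src:
        v = prev[v]; path.append(v)
    return path[::-1]

def cover_links(adj, beacon):
    uncovered = {frozenset((v, u)) for v in adj for u in adj[v]}
    paths = []
    while uncovered:
        u, v = sorted(next(iter(uncovered)))
        p = shortest_path(adj, beacon, u) + shortest_path(adj, v, beacon)
        paths.append(p)
        for a, b in zip(p, p[1:]):
            uncovered.discard(frozenset((a, b)))
    return paths

net = {"b": ["s1", "s2"], "s1": ["b", "s2"], "s2": ["b", "s1"]}
paths = cover_links(net, "b")
```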
Although SDN provides desirable characteristics, such as the manageability, flexibility, and extensibility of networks, it has a considerable disadvantage in reliability due to its centralized architecture. To protect SDN-enabled networks against large-scale, unexpected link failures, we propose ResilientFlow, which deploys distributed modules called Control Channel Maintenance Modules (CCMMs) on every switch and controller. The CCMMs enable switches to maintain their own control channels, which are a core and fundamental part of SDN. In this paper, we design, implement, and evaluate ResilientFlow.
Quantum key distribution (QKD), a cryptographic technology providing information-theoretic security based on physical laws, has moved from the research stage to the engineering stage. Although the communication distance is subject to a limitation attributable to the QKD fundamentals, recent research and development of “key relaying” over a “QKD network” is overcoming this limitation. However, there are still barriers to widespread use of QKD integrated with conventional information systems: applicability and development cost. To break down these barriers, this paper proposes a new solution for developing secure network infrastructure based on QKD technology that accommodates multiple applications. The proposed solution introduces three functions: (1) a directory mechanism to manage multiple applications hosted on the QKD network, (2) a key management method to share and allocate keys among multiple applications, and (3) a cryptographic communication library that enables existing cryptographic communication software to be ported to the QKD network easily. The proposed solution allows the QKD network to accommodate multiple applications of various types and, moreover, makes it easily applicable to conventional information systems. It also reduces the development cost per information system, since the development cost of the QKD network can be shared among the multiple applications. The proposed solution was implemented with a network emulating QKD technology and evaluated. The evaluation results show that the proposed solution enables a single QKD network infrastructure to host multiple applications concurrently, fairly, and effectively through a conventional application programming interface, the OpenSSL API. In addition, the overhead of secure session establishment with the proposed solution was quantitatively evaluated and compared.
In this paper, we present a faster (wall-clock time) sorting method for numerical data subjected to fully homomorphic encryption (FHE). Owing to their circuit-based construction and the security properties of FHE, most existing sorting methods cannot be applied to encrypted data without significantly compromising efficiency. The proposed algorithm utilizes the cryptographic single-instruction multiple-data (SIMD) operation, which is supported by most existing FHE schemes, to reduce the computational overhead. We conducted a careful analysis of the number of required recryption operations, which are the computationally dominant operations in FHE. Accordingly, we verified that the proposed SIMD-based sorting algorithm completes the given task more quickly than existing sorting methods if the number of data items and/or the maximum bit length of each data item exceeds specific thresholds.
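A plaintext analogue of the kind of data-oblivious sorting circuit that FHE sorting relies on is sketched below (odd-even transposition sort, illustrating why SIMD batching helps; this is not the authors' algorithm): the comparator schedule is fixed and data-independent, and the comparators within one stage touch disjoint slots, which is exactly what cryptographic SIMD batching can evaluate in parallel.

```python
# Data-oblivious sorting pass: a fixed schedule of compare-exchange
# (min/max) operations, independent of the input values. In an FHE
# setting each stage's independent comparators would be batched into
# one SIMD-packed homomorphic comparison.
def oblivious_sort(xs):
    xs = list(xs)
    n = len(xs)
    for stage in range(n):
        start = stage % 2  # alternate even/odd stages
        # comparators in this stage act on disjoint index pairs
        for i in range(start, n - 1, 2):
            lo, hi = min(xs[i], xs[i + 1]), max(xs[i], xs[i + 1])
            xs[i], xs[i + 1] = lo, hi
    return xs

sorted_xs = oblivious_sort([5, 1, 4, 2, 3])
```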
This paper proposes 1-bit feedforward distortion compensation for digital radio frequency conversion (DRFC) with 1-bit bandpass delta-sigma modulation (BP-DSM). The 1-bit BP-DSM allows direct RF signal transmission from a digitally modulated signal. However, it has previously been reported that 1-bit digital pulse trains with non-ideal rectangular waveforms cause spectral regrowth. The proposed architecture adds a feedforward path with another 1-bit BP-DSM and can thus cancel out the distortion components at any target carrier frequency. Both the main signal and the distortion compensation signal are 1-bit digital pulse trains, so no additional analog RF circuit is required for distortion compensation. Simulation results show that the proposed method holds the adjacent channel leakage ratio (ACLR) to 60 dB for LTE signal transmission. A prototype of the proposed 1-bit DRFC with an additional 1-bit BP-DSM in the feedforward path shows an ACLR of 50 dB, 4 dB higher than that of the conventional 1-bit DRFC.
Because accurate position information plays an important role in wireless sensor networks (WSNs), target localization has attracted considerable attention in recent years. In this paper, based on discretization of the target spatial domain, the target localization problem is formulated as a sparsity-seeking problem that can be solved by the compressed sensing (CS) technique. To satisfy the robust recovery condition of CS theory, known as the restricted isometry property (RIP), an orthogonalization preprocessing method based on LU (lower-triangular matrix, unitary matrix) decomposition is utilized to ensure that the observation matrix obeys the RIP. In addition, from the viewpoint of positioning systems, taking advantage of the joint posterior distribution of model parameters that captures the sparse prior knowledge of the targets, the sparse Bayesian learning (SBL) approach is utilized to improve the positioning performance. Simulation results illustrate that the proposed algorithm achieves higher positioning accuracy in multi-target scenarios than existing algorithms.
Cognitive radio sensor networks (CRSNs), with their dynamic spectrum access capability, appear to be a promising solution to the increasing challenge of spectrum crowding faced by traditional WSNs. In this paper, by maximizing the utility index of the CRSN, a node-density-adaptive spectrum access strategy for sensor nodes is proposed that takes into account the node density in a given event-driven region. For this purpose, considering bursty real-time data traffic, we analyze the energy efficiency (EE) and the packet failure rate (PFR), combining the network disconnection rate (NDR) and packet loss rate (PLR) during the channel switching interval (CSI), for both underlay and interweave spectrum access schemes. Numerical results confirm the validity of our theoretical analyses and indicate that an adaptive node density threshold (ANDT) exists for switching between the underlay and interweave spectrum access schemes.
The Virtual Machine Consolidation (VMC) algorithm is the core strategy of virtualization resource management software; in general, VMC efficiency dictates cloud datacenter efficiency to a great extent. However, current Virtual Machine (VM) consolidation strategies, including the Iterative Correlation Match Algorithm (ICMA), are not suitable for dynamic VM consolidation at the level of physical servers in actual datacenter environments. In this paper, we propose two VM consolidation and placement strategies, called standard Segmentation Iteration Correlation Combination (standard SICC) and Multi-level Segmentation Iteration Correlation Combination (multi-level SICC). Standard SICC is suitable for single-size VM consolidation environments and is the cornerstone of multi-level SICC, which is suitable for multi-size VM consolidation environments. Numerical simulation results indicate that the number of remaining Consolidated VMs (CVMs) generated by standard SICC is 20% less than that of ICMA in the single-level VM environment under the given initial conditions, and the number of remaining CVMs of multi-level SICC is 14% less than that of ICMA in the multi-level VM environment. Furthermore, multi-level SICC also uses 5% fewer physical servers than ICMA under the given initial conditions.
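Since ICMA and the SICC strategies are specified in the paper itself, the sketch below shows only the underlying bin-packing baseline that such consolidation strategies refine: first-fit-decreasing placement of VM demands onto fixed-capacity physical servers (demand values and capacity are toy assumptions).

```python
# First-fit decreasing: place the largest VM demands first, each on the
# first server with enough remaining capacity, opening new servers as needed.
def first_fit_decreasing(vm_demands, capacity):
    servers = []    # remaining capacity of each opened server
    placement = {}  # vm -> server index
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(servers):
            if demand <= free:
                servers[i] -= demand
                placement[vm] = i
                break
        else:
            servers.append(capacity - demand)
            placement[vm] = len(servers) - 1
    return placement, len(servers)

vms = {"vm1": 6, "vm2": 5, "vm3": 4, "vm4": 3, "vm5": 2}
placement, used = first_fit_decreasing(vms, capacity=10)
```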
Enforcing access control policies in Information-Centric Networking (ICN) is difficult because multiple copies of contents exist in various network locations. Traditional Access Control List (ACL)-based schemes are ill-suited to ICN, because all potential content distribution servers would either have to hold an identical access control policy or contact a centralized ACL server whenever their contents are accessed by consumers. To address these problems, we propose a distributed capability-based access control scheme for ICN. The proposed scheme is composed of an internal capability and an external capability: the former is included in the content, and the latter is added to the request message sent by the consumer. The content distribution servers can validate the access rights of the consumer through the internal and external capabilities without consulting access control policies. The proposed model also enhances consumer privacy by keeping the content name and consumer identity anonymous. Performance analysis and implementation show that the proposed scheme is feasible and more efficient than other access control schemes.
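One common way to realize such self-validating capabilities is sketched here under our own assumptions (an HMAC over content name, consumer ID, and expiry with a key shared between the authorizer and the distribution servers; the paper's exact token format is not reproduced):

```python
# Capability token sketch: any distribution server holding the shared key
# can validate a request locally, without contacting a central ACL server.
import hmac
import hashlib
import time

KEY = b"shared-secret-between-authorizer-and-servers"  # assumption

def issue_capability(content, consumer, ttl=3600):
    expiry = int(time.time()) + ttl
    msg = f"{content}|{consumer}|{expiry}".encode()
    tag = hmac.new(KEY, msg, hashlib.sha256).hexdigest()
    return {"content": content, "consumer": consumer,
            "expiry": expiry, "tag": tag}

def validate(cap):
    msg = f"{cap['content']}|{cap['consumer']}|{cap['expiry']}".encode()
    ok = hmac.compare_digest(
        cap["tag"], hmac.new(KEY, msg, hashlib.sha256).hexdigest())
    return ok and cap["expiry"] > time.time()

cap = issue_capability("/videos/a.mp4", "consumer42")
```

Tampering with any field invalidates the tag, so a server needs neither the policy database nor contact with the issuer at request time.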
In recent years, multiple-input multiple-output (MIMO) channel models for crowded areas, such as indoor offices, shops, and outdoor hotspot environments, have become a topic of significant interest. In such crowded environments, propagation paths are frequently shadowed by moving objects, such as pedestrians or vehicles. These shadowing effects can cause time variations in the delay and angle-of-arrival (AoA) characteristics of a channel. In this paper, we propose a method for modeling the shadowing effects of pedestrians in a cluster-based channel model. The proposed method uses cluster power variations to model the time-varying channel properties. We also propose a novel method for estimating the cluster power variation properties from measured data. In order to validate our proposed method, channel sounding in the 3 GHz band is conducted in a cafeteria during lunchtime. The results for the K parameter, delay spreads, and AoA azimuth spreads are compared for the measured data and the channel data generated using the proposed method. The results indicate that the time-varying delay-AoA characteristics can be effectively modeled using our proposed method.
Coordinate interleaved orthogonal design (CIOD) using four transmit antennas provides full-diversity, full-rate (FDFR) properties with low decoding complexity. However, the constellation expansion due to the coordinate interleaving of the rotated constellation increases the peak-to-average power ratio (PAPR). In this paper, we propose two signal constellation design methods with low PAPR. In the first method, we construct a signal constellation by properly selecting signal points from the expanded square-QAM constellation points, based on co-prime interleaving of the first coordinate signal. We design a regular interleaving pattern so that the coordinate product distance (CPD) after interleaving becomes large, yielding additional coding gain. In the second method, we propose a novel constellation with low PAPR based on clipping of the rotated square-QAM constellation. Our proposed signal constellations show much lower PAPR than the ordinary rotated QAM constellations for CIOD.
In this paper, we propose a channel-unaware algorithm to suppress narrowband interference (NBI) for time synchronization when multiple antennas are equipped at the receiver. Based on the fact that the characteristics of the synchronization signal differ from those of the NBI in both the time and spatial domains, the proposed algorithm suppresses the NBI by utilizing the multiple receive antennas in the eigen-domain of the NBI, where the eigen-domain is obtained from the time-domain statistical information of the NBI. Because time synchronization involves noncoherent detection, the proposed algorithm does not use the desired channel information, which distinguishes it from eigen-domain interference rejection combining (E-IRC). Simulation results show that, compared with the traditional frequency-domain NBI suppression technique, the proposed algorithm achieves about a 2 dB gain at the same probability of detection.
The random deployment of small cell base stations (BSs) causes the coverage areas of neighboring cells to overlap, which increases intercell interference and degrades system capacity. This paper proposes a new intercell interference management (IIM) scheme to improve the system capacity of multiple-input multiple-output (MIMO) small cell networks. The proposed IIM scheme consists of an interference cancellation (IC) technique on the receiver side and a neural network (NN)-based power control algorithm for intercell interference coordination (ICIC) on the transmitter side. To improve the system capacity, the NN power control optimizes downlink transmit power while IC eliminates interfering signals from received signals. Computer simulations compare the system capacity of the MIMO network under several ICIC algorithms: the NN, greedy search, belief propagation (BP), distributed pricing (DP), and maximum power, all of which can be combined with IC reception. Furthermore, this paper investigates the application of a multi-layered NN structure known as deep learning, and its pre-training scheme, to the mobile communication field. It is shown that the performance of the NN is better than that of BP and very close to that of greedy search; the low complexity of the NN algorithm makes it suitable for IIM. It is also demonstrated that combining IC with sectorization of BSs achieves a high capacity gain owing to reduced interference.
In this paper, we study the impact of imperfect channel information on an amplify-and-forward (AF)-based two-way relaying network (TWRN) with adaptive modulation, which consists of two end terminals and multiple relays. Specifically, we consider a single-relay selection scheme for the TWRN in the presence of outdated channel state information (CSI) and channel estimation errors. First, we choose the best relay based on outdated CSI and perform adaptive modulation on both relaying paths with channel estimation errors. Then, we discuss the impact of the outdated CSI on the statistics of the signal-to-noise ratio (SNR) per hop. In addition, we formulate the end-to-end SNRs with channel estimation errors and offer statistical analyses in the presence of both outdated CSI and channel estimation errors. Finally, we provide performance analyses of the proposed TWRN with adaptive modulation in terms of average spectral efficiency, average bit error rate, and outage probability. Numerical examples are given to verify the analytical results under various system conditions.
Single-carrier (SC) transmission with space-time block coded (STBC) transmit diversity can achieve good bit error rate (BER) performance. However, in a high-mobility environment, the orthogonality of the STBC codeword is distorted and, as a consequence, the BER performance is degraded by the resulting interference. In this paper, we propose a novel frequency-domain equalization (FDE) scheme for SC-STBC transmit diversity in doubly selective fading channels. Multiple FDE weight matrices, each associated with a different code block, are jointly optimized based on the minimum mean square error (MMSE) criterion, taking into account not only channel frequency variation but also channel time variation over the STBC codeword. Computer simulations confirm that the proposed robust FDE achieves BER performance superior to conventional FDE, which was designed under the assumption of quasi-static fading.
This paper considers beamforming design for energy-efficient transmission over multiple-input single-output (MISO) channels. The energy efficiency maximization problem is non-convex due to the fractional form of its objective function. We propose an efficient method to transform the fractional objective function into a difference-of-concave-functions (DC) form, which can be solved by the successive convex approximation (SCA) algorithm. We then apply the proposed transformation and a pricing mechanism to develop a distributed beamforming optimization for multiuser MISO interference channels, in which each user solves its optimization problem independently and only limited information exchange is needed. Numerical results show the effectiveness of the proposed algorithm.
Pulse pairs (PPs) generated by Distance Measuring Equipment (DME) cause severe interference to the L-band Digital Aeronautical Communication System type 1 (L-DACS1), which is based on Orthogonal Frequency Division Multiplexing (OFDM). In this paper, a novel and practical PP mitigation approach is proposed. Unlike previous work, it uses only time-domain methods to mitigate the interference, so it does not affect subsequent signal processing in the frequency domain. At the receiver, the proposed approach can precisely reconstruct the deformed PPs (DPPs), which are often overlapped and have various parameters. First, a filter bank and a correlation scheme are jointly used to detect non-overlapped DPPs, and a weighted-average scheme is used to automatically measure the DPP waveform. Second, based on the measured waveform, sparse estimation is used to estimate the precise positions of the DPPs. Finally, the parameters of each DPP are estimated by a non-linear estimator; the key point of this step is that a piecewise-linear model is used to approximate the non-linear carrier frequency of each DPP. Numerical simulations show that, compared with existing work, the proposed approach is more robust, performs closer to the interference-free case, and improves the bit error rate by about 10 dB.
There is a strong demand for broadband and stable Internet connectivity not only in offices and homes but also on high-speed trains. Several systems provide high-speed trains with Internet connectivity using technologies such as leaky coaxial cable (LCX), Wi-Fi, and WiMAX; however, their actual throughputs are less than 2 Mbps. We developed a free-space optical (FSO) communication transceiver called LaserTrainComm2014 that achieves a throughput of 1 Gbps between the ground and a train. LaserTrainComm2014 employs a high-speed image sensor for coarse tracking and a quadrant photodiode (QPD) for accurate tracking. Since the image captured by the high-speed image sensor contains several types of noise, image processing is necessary to detect the beacon light of the other LaserTrainComm2014. In field experiments on a vehicle test course, LaserTrainComm2014 achieved a link-layer handover time of 21 milliseconds (ms) at a speed of 60 km/h. Even if network-layer signaling takes 10 ms, the total communication disruption time due to handover is short enough to provide passengers with Internet connectivity for live-streaming Internet applications such as YouTube, Internet radio, and Skype.
The rapid development of wireless communication systems has led to two major challenges: energy conservation and interference avoidance. Addressing these challenges is critical for sustaining modern green communications. This paper proposes two energy-efficient schemes, a cell switching strategy and a power control technique, for a heterogeneous network environment. The proposed schemes save energy while maintaining the service quality for users. Simulation results show that, compared with conventional schemes, the proposed schemes reduce energy consumption by up to 18% and further enhance system energy efficiency by up to 22% without using any switch-off procedure.
To minimize the packet error rate in extremely dynamic vehicular networks, a novel vehicle-to-vehicle (V2V) mobile content transmission scheme that jointly employs random network coding and shuffling/scattering techniques is proposed in this paper. The proposed scheme consists of three steps. Step 1: the original mobile content, consisting of several packets, is encoded into coded blocks using random network coding for efficient error recovery. Step 2: the encoded blocks are shuffled to average the error rate among them. Step 3: the shuffled blocks are scattered at different vehicle locations to overcome estimation errors in the optimum transmission location. Applying the proposed scheme in vehicular networks can yield error-free transmission with high efficiency. Our simulation results corroborate that the proposed scheme significantly improves the packet error rate performance in high-mobility environments. Thanks to the flexibility of network coding, the proposed scheme can be designed as a separate module in the physical layer of various wireless access technologies.
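Steps 1 and 2 can be sketched under simplifying assumptions (GF(2) coding coefficients and one-byte toy packets; practical deployments typically use a larger field such as GF(2^8), and step 3 depends on vehicle positions, so it is omitted here):

```python
# Random linear network coding over GF(2): each coded block is the XOR of
# a random nonempty subset of the source packets, tagged with its
# coefficient vector so the receiver can decode by Gaussian elimination.
import random

def rlnc_encode(packets, n_coded, rng):
    coded = []
    while len(coded) < n_coded:
        coeffs = [rng.randint(0, 1) for _ in packets]
        if not any(coeffs):
            continue  # skip the useless all-zero combination
        blk = 0
        for c, p in zip(coeffs, packets):
            if c:
                blk ^= p
        coded.append((coeffs, blk))
    return coded

rng = random.Random(7)
packets = [0x12, 0x34, 0x56, 0x78]  # toy one-byte "packets"
blocks = rlnc_encode(packets, n_coded=8, rng=rng)
rng.shuffle(blocks)  # step 2: spread losses evenly across coded blocks
```

Because any sufficiently large set of linearly independent coded blocks suffices for decoding, it does not matter which particular blocks survive, which is what makes the shuffling and scattering steps effective against bursty losses.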