In a world of continuously expanding amounts of data, retrieving interesting information from enormous data sets becomes more complex every day. Solutions for precomputing views on these big data sets mostly follow either an offline approach, which is slow but can take the entire data set into account, or a streaming approach, which is fast but relies only on the latest data entries. A hybrid solution was introduced through the Lambda architecture concept. It combines both offline and streaming approaches by analyzing data in a fast speed layer first, and in a slower batch layer later. However, this introduces a new synchronization challenge: once the data is analyzed by the batch layer, the corresponding information needs to be removed from the speed layer without introducing redundancy or loss of data. In this paper we propose a new approach to implementing the Lambda architecture concept that is independent of the technologies used for offline and stream computing. A universal solution is provided to manage the complex synchronization introduced by the Lambda architecture, along with techniques to provide fault tolerance. The proposed solution is evaluated by means of detailed experimental results.
In this paper, we propose a novel architecture for a deep learning system, named the k-degree layer-wise network, to realize efficient geo-distributed computing between the Cloud and the Internet of Things (IoT). Geo-distributed computing extends the Cloud to the geographical edge of the network, in the neighborhood of IoT devices. The basic ideas of the proposal are a k-degree constraint and a layer-wise constraint. The k-degree constraint requires that the degree of each vertex on the h-th layer be exactly k(h), extending existing deep belief networks while controlling the communication cost. The layer-wise constraint requires that the layer-wise degrees be monotonically decreasing in the positive direction, so that the dimension of the data is gradually reduced. We prove that the k-degree layer-wise network is sparse, while a typical deep neural network is dense. In an evaluation on the M-distributed MNIST database, the proposal is superior to a state-of-the-art model in terms of communication cost and learning time, with good scalability.
Over the last few years, MapReduce has become the prevailing framework for large-scale data processing. Instead of writing raw MapReduce programs, which are cumbersome for expressing complex logic, many developers adopt high-level query languages, such as Hive or Pig Latin, to formulate their complex queries. These languages automatically compile each query into a workflow of MapReduce jobs, so they greatly facilitate the querying and management of large datasets. One option for speeding up the execution of workflows is to save previously produced results and reuse them in the future if needed. In this paper we present SuperRack, which uses shared storage devices to store the results of each workflow and allows a new query to reuse these results in order to avoid redundant computation and speed up execution. We propose several novel techniques to improve the access and storage efficiency of the previous results. We also evaluate SuperRack to verify its feasibility and effectiveness. Experiments show that our solution outperforms Hive significantly under the TPC-H benchmark and real-life workloads.
3GPP Long Term Evolution (LTE) is one of the most advanced technologies in the wireless and mobility field because it provides high-speed data and sophisticated applications. LTE was originally deployed by service providers on various platforms using separate dedicated hardware in the radio access layer and the Evolved Packet Core (EPC) network layer, thereby limiting the system's flexibility and capacity provisioning. Thus, the concept of virtualization was introduced in the EPC hardware to overcome the limitations of dedicated hardware platforms. It was also introduced in the IP Multimedia Subsystem (IMS) and Machine-to-Machine (M2M) applications for the same reason. This paper provides a simulation model of a virtualized EPC and a virtualized M2M transport application server connected via an external IP network, which has significant importance in the future of mobile networks. This model studies the virtualized server connectivity problem, where two separate virtual machines communicate via the existing external legacy IP network. The simulation results show moderate performance, indicating that the selection of IP technology is much more critical than before. The paper also models MPLS technology as a replacement for the external IP routing mechanism to provide traffic engineering and achieve more efficient network performance. To provide a realistic network environment, a Poisson Pareto Burst Process (PPBP) traffic source is carried over the UDP transport layer, which matches the statistical properties of real-life M2M traffic. The paper also proves end-to-end interoperability of LTE and MPLS running GTP and the MPLS Label Forwarding Information Base (LFIB) with MPLS traffic engineering, respectively. Finally, it simulates several scenarios using Network Simulator 3 (NS-3) to evaluate the performance improvement over the traditional LTE IP architecture under M2M traffic load.
In modern communication systems, achieving near-ideal carrier synchronization without the help of pilot signals, under symbol-rate sampling and low signal-to-noise ratio (SNR), is a critical and challenging issue for existing carrier tracking techniques. To overcome this issue, this paper proposes an effective carrier frequency and phase offset tracking scheme with a robust confluent synchronization architecture whose main components are a digital frequency-locked loop (FLL), a digital phase-locked loop (PLL), a modified symbol hard-decision block, and several sampling rate conversion blocks. Even though received signals are sampled at the symbol baud rate, this carrier tracking scheme is still able to obtain precise estimates of the carrier synchronization parameters under very low SNRs. The performance of the proposed carrier synchronization scheme is evaluated using the Monte Carlo method. Simulation results confirm the feasibility of this carrier tracking scheme and demonstrate that it enables both a rate-3/4 irregular low-density parity-check (LDPC) coded system and a military voice transmission system utilizing the direct-sequence spread spectrum (DSSS) technique to achieve satisfactory bit-error rate (BER) performance at correspondingly low SNRs.
In this paper, we propose a workload assignment policy for reducing the power consumption of air conditioners in data centers. In the proposed policy, the temperatures of all server back-planes are equalized by moving workload from the servers with the highest temperatures to the servers with the lowest temperatures; this allows the temperature set points of the air conditioners to be raised, which reduces their power consumption. To evaluate the proposed policy, we use a computational fluid dynamics simulator to obtain the airflow and air temperature in data centers, and an air conditioner model based on experimental results from an actual data center. Through this evaluation, we show that the air conditioners' power consumption is reduced by 10.4% in a conventional data center. In addition, in a tandem data center proposed by our research group, the air conditioners' power consumption is reduced by 53%, and the total power consumption of the whole data center is shown to be reduced by 23% by reusing the exhaust heat from the servers.
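The equalization step described above can be sketched as a simple greedy loop. The linear temperature model (`temp = base + coeff * load`), the constants, and the function name below are illustrative assumptions only; the paper itself relies on a CFD simulator rather than a closed-form model.

```python
# Illustrative sketch of temperature-equalizing workload assignment.
# The linear model temp = base + coeff * load is an assumption made
# for illustration; the paper uses a CFD simulator instead.

def equalize_temperatures(loads, base_temps, coeff=1.0, step=1.0,
                          tol=0.5, max_iters=10000):
    """Greedily move `step` units of workload from the hottest server
    to the coolest one until the back-plane temperature spread is
    at most `tol` degrees (or no more workload can be moved)."""
    loads = list(loads)
    for _ in range(max_iters):
        temps = [b + coeff * l for b, l in zip(base_temps, loads)]
        hot = temps.index(max(temps))
        cold = temps.index(min(temps))
        if temps[hot] - temps[cold] <= tol or loads[hot] < step:
            break
        loads[hot] -= step
        loads[cold] += step
    return loads, [b + coeff * l for b, l in zip(base_temps, loads)]
```

Because workload is only moved, never created or dropped, the total load is preserved while the temperature spread shrinks.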
Small loop gain and low crossover frequency result in poor dynamic performance of a single-loop output-voltage-controlled boost converter in continuous conduction mode. Multi-loop current control can improve the dynamic performance; however, it increases the cost, size, and weight of the circuit. Sensorless multi-loop control solves these problems, but it severely aggravates the difficulty of evaluating the closed-loop characteristics: there are more parameters in the loops, and, unlike the single-loop case, the relationships between the loop gains and the closed-loop characteristics, including audio susceptibility and output impedance, are generally indirect in the multi-loop case. Therefore, this paper proposes a novel robust H∞ synthesis approach in the time domain to design a sensorless controller for boost converters. The approach requires solving neither algebraic Riccati equations nor linear matrix inequalities and, most importantly, parameterizes the controller by a single adjustable parameter. This parameter behaves like a ‘knob’ on the dynamic performance, which makes the closed-loop characteristics evaluation straightforward. A boost converter is used to verify the proposed synthesis approach. Simulations show the great convenience of the closed-loop characteristics evaluation, and practical experiments confirm the simulations.
It is known that in the selected mapping (SLM) scheme for orthogonal frequency division multiplexing (OFDM), the correlation (CORR) metric outperforms the peak-to-average power ratio (PAPR) metric in terms of bit error rate (BER) performance. It is also well known that four-times oversampling is used for estimating the PAPR of the continuous OFDM signal. In this paper, the oversampling effect on the OFDM signal is analyzed when the CORR metric is used for the SLM scheme in the presence of a nonlinear high-power amplifier. An analysis based on the correlation coefficients of the oversampled OFDM signals shows that the CORR metric with two-times oversampling in the SLM scheme is sufficient to achieve the same BER performance as the four-times and 16-times oversampling cases. Simulation results confirm that for the SLM scheme using the CORR metric, the BER performance for the two-times oversampling case is almost the same as that for the four-times and 16-times oversampling cases.
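The role of oversampling here can be illustrated with the standard zero-padded IFFT construction of an oversampled OFDM symbol; the sketch below shows plain PAPR estimation on that grid (not the paper's CORR metric), with the function name being an assumption for illustration.

```python
import numpy as np

def papr_db(symbols, oversample=4):
    """Estimate the PAPR (in dB) of one OFDM symbol on an
    `oversample`-times grid via zero-padded IFFT; `symbols` holds
    the frequency-domain constellation points."""
    n = len(symbols)
    padded = np.zeros(n * oversample, dtype=complex)
    padded[:n // 2] = symbols[:n // 2]       # positive-frequency bins
    padded[-(n // 2):] = symbols[n // 2:]    # negative-frequency bins
    x = np.fft.ifft(padded) * oversample     # time-domain samples
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())
```

Because the symbol-rate samples are a subset of the oversampled grid while the mean power is unchanged, the oversampled PAPR estimate is never smaller than the symbol-rate one, which is why oversampling matters when approximating the continuous signal.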
Past disasters, e.g., mega-quakes and tsunamis, have taught us that it is difficult to fully repair heavily damaged network systems in a short time. The only method for quickly restoring core communications is to start by fully utilizing the surviving network resources from different networks. However, as these networks might be built using different vendors' products (which are often incompatible with each other), the interconnection and utilization of these surviving resources are not straightforward. In this paper, we consider an all-optical multi-vendor interconnection method as an efficient reactive approach during disaster recovery. First, we introduce a disaster recovery scenario in which we use the multi-vendor interconnection approach. Second, we present two sub-problems and propose solutions: (1) the network planning problem for constructing a multi-vendor interconnection-based emergency optical network, and (2) the interconnection problem for multi-vendor optical networks, including both the data plane and the control-and-management plane. To enable the operation of multi-vendor systems, command translation middleware is developed for the individual vendor-specific network control-and-management systems. Simulations are conducted to evaluate our proposal for sub-problem (1). The results reveal that multi-vendor interconnection can lead to minimum-cost network recovery. Additionally, an emergency optical network prototype is implemented on a two-vendor optical network test-bed to address sub-problem (2). Demonstrations of both the data plane and the control-and-management plane validate the feasibility of the multi-vendor interconnection approach in disaster recovery.
Accidental falling among elderly people has become a public health concern; thus, there is a need for systems that detect a fall when it happens. This paper presents a portable real-time remote health monitoring system that can remotely monitor patients' movements. The system is designed and implemented using ZigBee wireless technologies, and the data is analysed using Matlab. The purpose of this research is to determine the acceleration thresholds for fall detection, using tri-axial accelerometer readings at the head, waist, and knee. Seven voluntary subjects performed purposeful falls and Activities of Daily Living (ADL). The results indicated that measurements from the waist and head can accurately detect falls; the sensitivity and reliability of fall detection ranged between 80% and 90%. In contrast, the measurements showed that the knee is not a useful position for fall detection.
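A minimal sketch of such a threshold test on tri-axial accelerometer readings is shown below. The 2.5 g threshold and the function name are illustrative assumptions, not the paper's calibrated values, which were derived experimentally per body position.

```python
import math

# Assumed threshold for illustration only; the paper determines the
# actual thresholds experimentally from waist/head accelerometer data.
def detect_fall(samples, threshold_g=2.5):
    """Return True if the acceleration magnitude of any tri-axial
    sample (ax, ay, az), in units of g, exceeds the threshold."""
    for ax, ay, az in samples:
        if math.sqrt(ax * ax + ay * ay + az * az) > threshold_g:
            return True
    return False
```

Normal activity stays near 1 g (gravity), while the impact phase of a fall produces a brief spike well above the threshold.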
Wi-Fi P2P has been deployed extensively in mobile devices. However, Wi-Fi P2P is not efficient because it requires an IP-layer connection for transmitting even short messages to nearby devices. This is especially problematic in high-density or highly mobile environments, because a user on the move has difficulty selecting service-available devices, and a user device has to frequently connect to and disconnect from nearby devices. This paper proposes a new messaging framework that enables application-level messages to be exchanged between nearby devices with no IP-layer connectivity over Wi-Fi P2P. The pre-association messaging framework (PAMF) supports both broadcast and unicast transmission to maximize the delivery success rate, taking into account the number of peers and messages. Evaluations of PAMF conducted under real scenarios show that application-level messages can be exchanged within a few seconds, with a high success rate. PAMF provides high portability and extensibility because it does not breach the Wi-Fi P2P standard. Moreover, the demonstrations show that PAMF is practical for new proximity services such as local marketing and urgent messaging.
Computer networks require sophisticated control mechanisms to realize fair resource allocation among users in conjunction with efficient resource usage. To successfully realize fair resource allocation in a network, the behavior of each user must be controlled with fairness in mind. To provide efficient resource utilization, the behavior of all users must be controlled with efficiency in mind. To realize both control goals, which have different granularities, at the same time, a hierarchical network control mechanism that combines microscopic control (i.e., fairness control) and macroscopic control (i.e., efficiency control) is required. In previous works, Aida proposed the concept of chaos-based hierarchical network control. As an application of this concept, Aida then designed a fundamental framework of hierarchical transmission rate control based on the chaos of coupled relaxation oscillators. To show that the chaos-based concept is realizable, one should specify the chaos-based hierarchical transmission rate control in enough detail for it to work in an actual network, and confirm that it works as intended. In this study, we implement the chaos-based hierarchical transmission rate control in a popular network simulator, ns-2, and confirm its operation through experiments. The results verify that the chaos-based concept can be successfully realized in TCP/IP networks.
Predicting the routing paths between any given pair of Autonomous Systems (ASes) is very useful in network diagnosis, traffic engineering, and protocol analysis. Existing methods address this problem by resolving the best path from a snapshot of BGP (Border Gateway Protocol) routing tables. However, due to route deficiencies, routing policy changes, and other causes, the best path changes over time. Consequently, existing methods for path prediction fail to capture route dynamics. To predict AS-level paths in dynamic scenarios (e.g., network failures), we propose a per-neighbor path ranking model based on how long the paths have been used, and apply this routing model to extract each AS's route choice configurations for the paths observed in BGP data. With route choice configurations for multiple paths, we are able to predict the path under various network scenarios. We further constrain the model with strict policies to ensure routing convergence, formally prove that it converges, and discuss path prediction that captures routing dynamics by disabling links. By evaluating the consistency between our model's routing and the actually observed paths, we show that our model outperforms the state-of-the-art work.
This paper proposes the use of two transmit and two receive antennas spaced at roughly the width of a human body to improve communication quality in the presence of shadowing by a human body in the 60 GHz band. In the proposed method, the transmit power is divided between the two transmit antennas, and the receive antenna that provides the maximum receive level is then chosen. Although the receive level is reduced by 3 dB, the maximum attenuation caused by human body shadowing is totally suppressed. The relationship between the antenna element spacing and the theoretical spacing based on first Fresnel zone theory is clarified. Experiments confirm that an antenna spacing several centimeters wider than that given by first Fresnel zone theory is enough to attain a significant performance improvement.
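The centimeter scale of the theoretical spacing follows directly from the standard first Fresnel zone radius formula, sketched below; the link geometry in the usage is a hypothetical example, not the paper's measurement setup.

```python
import math

def fresnel_radius_m(freq_hz, d1_m, d2_m):
    """Radius of the first Fresnel zone at a point d1 from the
    transmitter and d2 from the receiver:
    r = sqrt(lambda * d1 * d2 / (d1 + d2))."""
    wavelength = 3e8 / freq_hz  # free-space wavelength in meters
    return math.sqrt(wavelength * d1_m * d2_m / (d1_m + d2_m))
```

At 60 GHz the wavelength is about 5 mm, so at the midpoint of a hypothetical 4 m link the first Fresnel radius is roughly 7 cm, consistent with an antenna spacing on the order of a human body's width.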
A novel circularly and linearly polarized loop antenna is presented. A simple loop configuration, twisted into a cross shape, achieves wide-beam circularly polarized radiation simultaneously with linear polarization in two closely spaced bands. This cross configuration gives the loop antenna good circular polarization because it uses the transmission-line mode of a folded dipole antenna. For these reasons, the antenna is named the Cross Spiral Antenna (CSA). In this paper, the basic structure of the CSA and the principle by which it radiates circular polarization with single-port feeding are explained. A prototype CSA, tuned to around 1.57 GHz and 1.6 GHz, is tested to verify the effectiveness of the suggested antenna configuration.
In this paper, we propose an efficient regularized zero-forcing (RZF) precoding method that has lower hardware resource requirements and produces a shorter delay to the first transmitted symbol compared with truncated polynomial expansion (TPE) based on the Neumann series in massive multiple-input multiple-output (MIMO) systems. The proposed precoding scheme, named matrix decomposition-polynomial expansion (MDPE), applies a matrix decomposition algorithm based on polynomial expansion to significantly reduce the computational complexity of the full matrix multiplication. Accordingly, it is suitable for real-time hardware implementations and high-mobility scenarios. Furthermore, the proposed method provides a simple expression that links the optimization coefficients to the ratio of BS antennas to UTs (β). This approach speeds up convergence to the matrix inverse using a matrix polynomial with few terms, further reducing computation costs. Simulation results show that the MDPE scheme can rapidly approximate the performance of the full-precision RZF and the optimal TPE algorithm, while adaptively selecting the number of matrix polynomial terms according to the β and SNR conditions. It thereby obtains a high average achievable rate for the UTs under power allocation.
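The TPE baseline mentioned above approximates a matrix inverse with a truncated Neumann series. A minimal sketch of that baseline (not the proposed MDPE scheme) is given below; the trace-based scaling heuristic is an assumption made so the series converges for symmetric positive-definite matrices.

```python
import numpy as np

def neumann_inverse(A, terms=8):
    """Approximate A^{-1} with a truncated Neumann series:
    A^{-1} ~ alpha * sum_{k=0}^{terms-1} (I - alpha*A)^k,
    valid when the spectral radius of (I - alpha*A) is below 1.
    alpha = 1/trace(A) is a simple convergence heuristic for
    symmetric positive-definite A (an illustrative assumption)."""
    n = A.shape[0]
    alpha = 1.0 / np.trace(A)
    M = np.eye(n) - alpha * A
    acc = np.eye(n)   # k = 0 term
    P = np.eye(n)
    for _ in range(terms - 1):
        P = P @ M     # (I - alpha*A)^k
        acc = acc + P
    return alpha * acc
```

Each extra term costs one matrix multiplication instead of a full inversion, which is the hardware-friendliness both TPE and the proposed MDPE scheme exploit; MDPE's contribution is reaching a good approximation with fewer terms.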
This paper presents a low-complexity metric for joint maximum-likelihood detection (MLD) in overloaded multiple-input multiple-output (MIMO)-orthogonal frequency division multiplexing (OFDM) systems. In overloaded MIMO systems, a nonlinear detection scheme such as MLD combined with error correction coding achieves better performance than is possible with a single signal stream using higher-order modulation. However, MLD incurs high computational complexity because of the multiplications in the selection of candidate signal points. A Manhattan metric has therefore been used to reduce the complexity. Nevertheless, it is not accurate and causes performance degradation in overloaded MIMO systems. Thus, this paper proposes a new metric whose calculation involves only summations and bit shifts. Numerical results obtained through computer simulation show that the proposed metric improves bit error rate (BER) performance by more than 0.2 dB at a BER of 10^-4 in comparison with the Manhattan metric.
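The paper's exact metric is not reproduced here, but the trade-off it targets can be illustrated with the classic multiplication-free "alpha max plus beta min" magnitude approximation, which, like the proposed metric, uses only comparisons, additions, and a bit shift, and is less biased on diagonal errors than the plain Manhattan metric.

```python
def euclidean_sq(dx, dy):
    """Exact squared Euclidean metric (requires multiplications)."""
    return dx * dx + dy * dy

def manhattan(dx, dy):
    """Manhattan metric: additions only, but overestimates
    diagonal distances."""
    return abs(dx) + abs(dy)

def alpha_max_beta_min(dx, dy):
    """Classic multiplication-free magnitude approximation
    max + min/2, using only comparisons, additions, and one
    bit shift (integer inputs)."""
    a, b = abs(dx), abs(dy)
    hi, lo = (a, b) if a >= b else (b, a)
    return hi + (lo >> 1)  # lo >> 1 == lo // 2 for integers
```

For the error vector (3, 4), whose true magnitude is 5, the Manhattan metric gives 7 while the shift-based approximation gives 5, showing how a small amount of extra shift/compare logic recovers accuracy without multiplications.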
In this paper, we propose a novel censor-based cooperative spectrum sensing strategy, called adaptive energy-efficient sensing (AES), in which both sequential sensing and a censored reporting mechanism are employed, aiming to reduce the sensing energy consumption of secondary user relays (SRs). In AES, an anchor secondary user (SU) requests cooperative sensing only when it does not detect the presence of the primary user (PU) by itself, and a cooperating SR reports its censored decision only if the sensing result differs from its previous one. We derive generalized-form expressions for the false alarm and detection probabilities over Rayleigh fading channels for AES. The sensing energy consumption is also analyzed. We then study the sensing energy overhead minimization problem and show that the sensing time allocation can be optimized to minimize the miss-detection probability and the sensing energy overhead. Finally, numerical results show that the proposed strategy can remarkably reduce the sensing energy consumption while only slightly degrading the detection performance compared with the traditional scheme.
Tag collision has a negative impact on the performance of RFID systems. In this letter, we propose an algorithm termed anti-collision protocol based on improved collision detection (ACP-ICD). In this protocol, dual-prefix matching and a collision bit detection technique are employed to reduce the number of queries and promptly identify tags. Through the dual-prefix matching method and collision bit detection in the process of collision arbitration, idle slots are eliminated. Moreover, the reader makes full use of collisions to improve identification efficiency. Both analytical and simulation results are presented to show that ACP-ICD outperforms existing anti-collision algorithms.
Nonlinear precoding improves the downlink bit error rate (BER) performance of multi-user multiple-input multiple-output (MU-MIMO) systems. Broadband single-carrier (SC) block transmission can enhance the BER reduction achieved by nonlinear precoding, as it provides frequency diversity gain. This paper considers Tomlinson-Harashima precoding (THP) as a nonlinear precoding scheme for the SC-MU-MIMO downlink. In the SC-MU-MIMO downlink with frequency-domain THP proposed by Degen and Rühl (called SC-FDTHP), the inter-symbol interference (ISI) is suppressed by transmit frequency-domain equalization (FDE) after suppressing the inter-user interference (IUI) by frequency-domain THP. Transmit FDE increases the signal variance, so the transmission performance improvement is limited. In this paper, we propose a new SC-MU-MIMO downlink with time-domain THP that can pre-remove both ISI and IUI (called SC-TDTHP) if perfect channel state information (CSI) is available. The modulo operation in THP suppresses the signal variance increase caused by ISI and IUI pre-removal, and hence the transmission quality improves. For further performance improvement, vector perturbation is introduced to SC-TDTHP (called SC-TDTHP w/VP). Computer simulation shows that SC-TDTHP achieves better BER performance than SC-FDTHP, and that SC-TDTHP w/VP offers further BER improvement over SC-MU-MIMO with VP (called SC-VP). Computational complexity is also compared; it is shown that SC-TDTHP and SC-TDTHP w/VP incur higher computational complexity than SC-FDTHP but lower than SC-VP.
In this paper, an interference rejection combining (IRC) technique is proposed for SFBC-OFDM cellular systems that exhibit multiple carrier frequency offsets (CFOs). The IRC weight and the corresponding value for CFO compensation in the proposed technique are obtained by maximizing the post-SINR, i.e., minimizing both the interference signal and inter-channel interference (ICI) terms caused by multiple CFOs. The performance of the conventional IRC and proposed IRC techniques is evaluated by computer simulation for an SFBC-OFDM cellular system with multiple CFOs.
In our previous paper, we presented the concept of “Baseband Radio” as an ideal form of future wireless communication. Furthermore, to enhance the adaptability of baseband radio, adaptive baseband radio was discussed as the ultimate communication system; it integrates the functions of cognitive radio and software-defined radio. In this paper, two transmission schemes that take advantage of adaptive baseband radio are introduced, and the results of a performance evaluation are presented. The first is a scheme based on DSFBC for realizing higher reliability; it allows the flexible use of frequency bands over a wide range of white space. The second is a low-power-density communication scheme with spectrum spreading by means of frequency-domain differential coding, so that the secondary system does not seriously interfere with primary-user systems that have been assigned the same frequency band.
Cognitive radio (CR) is an important technology for providing high-efficiency data communication in the IoT (Internet of Things) era. Signal detection is a key technology in CR for detecting communication opportunities. Energy detection (ED) is a signal detection method with low computational complexity; however, it can only estimate the presence or absence of signal(s) in the observed band. Cyclostationarity detection (CS) is an alternative signal detection method that detects signal features such as periodicity and can estimate the symbol rate of a signal if one is present. However, it incurs high computational complexity, and it cannot estimate the symbol rate precisely in the case of a single-carrier signal with a low roll-off factor (ROF). This paper proposes a method to coarsely estimate a signal's bandwidth and carrier frequency from its power spectrum with lower computational complexity than CS. The proposed method can estimate the bandwidth and carrier frequency even of a low-ROF signal. This paper evaluates the proposed method's performance by numerical simulations. The numerical results show that in all cases the proposed coarse bandwidth and carrier frequency estimation performs almost comparably to CS with lower computational complexity, and even outperforms it in the case of a single-carrier signal with a low ROF. The proposed method is generally effective for the classification of unidentified signals (e.g., single carrier, OFDM).
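The general idea of reading bandwidth and carrier frequency off a power spectrum can be sketched with a simple peak-relative threshold; this is a simplified illustration under assumed names and a flat in-band spectrum, not the paper's estimator.

```python
import numpy as np

# Simplified sketch: the threshold rule and function name are
# illustrative assumptions, not the paper's proposed estimator.
def coarse_band_estimate(psd, freqs, thresh_ratio=0.5):
    """Coarsely estimate (bandwidth, carrier frequency) from a power
    spectrum: take the bins above a fraction of the peak power, report
    their span as bandwidth and its midpoint as carrier frequency."""
    above = psd > thresh_ratio * psd.max()
    idx = np.flatnonzero(above)
    f_lo, f_hi = freqs[idx[0]], freqs[idx[-1]]
    return f_hi - f_lo, (f_lo + f_hi) / 2.0
```

For a roughly rectangular occupied band, the half-power span directly yields the bandwidth and its center the carrier frequency, using only one pass over the spectrum rather than the cyclic correlations required by CS.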
We have proposed a quality of experience (QoE)-oriented wireless local area network (WLAN) to provide sufficient QoE to important application flows. Unlike an ordinary IEEE 802.11 WLAN, the proposed QoE-oriented WLAN dynamically performs admission control with the aid of the prediction of a “loadable capacity” criterion. This paper proposes an algorithm for dynamic network reconfiguration by centralized control among multiple basic service sets (BSSs) of the QoE-oriented WLAN, in order to maximize the number of traffic flows whose QoE requirements can be satisfied. With the proposed dynamic reconfiguration mechanism, stations (STAs) can change the access point (AP) they connect to, and the operating frequency channel of a BSS can also be changed. These controls are performed according to the current channel occupancy rate of each BSS and the radio resources required to satisfy the QoE requirement of any traffic flow that is not allowed to transmit its data by the admission control. The effectiveness of the proposed dynamic network reconfiguration is evaluated through indoor experiments under two cases: a 14-node experiment with the QoE-oriented WLAN only, and a 50-node experiment where the ordinary IEEE 802.11 WLAN and the QoE-oriented WLAN coexist. The experiments confirm that the QoE-oriented WLAN can significantly increase the number of traffic flows that satisfy their QoE requirements, the total utility of the network, and the QoE-satisfied throughput, i.e., the system throughput contributing to satisfying the QoE requirements of traffic flows. It is also revealed that the QoE-oriented WLAN can protect the traffic flows in the ordinary WLAN if the border of the loadable capacity is properly set, even in environments where the hidden terminal problem occurs.
This paper investigates a signal area (SA) estimation method for wideband, long-duration spectrum measurements for dynamic spectrum access. The SA denotes the area (in the time/frequency domain) occupied by the primary user's signal. The traditional approach, which utilizes only the Fourier transform (FT) and an energy detector (ED) for SA estimation, achieves low complexity, but its estimation performance is not very high. To address this issue, we apply post-processing to improve the performance of the FT-based ED. Our proposed method, simple SA (S-SA) estimation, exploits the correlation of the spectrum states among neighboring tiles and the fact that an SA typically has a rectangular shape, to estimate the SA with high accuracy and relatively low complexity compared to a conventional method, contour tracing SA (CT-SA) estimation. Numerical results show that the S-SA estimation method achieves better detection performance. The SA estimation and processing can also reduce the number of bits needed to store or transmit the observed information compared to the FT-based ED; thus, in addition to improved detection performance, it compresses the data.
Ranging is commonly used to measure the distance to a satellite, since it is one of the quickest and most effective methods of finding the position of a satellite. In general, ranging ambiguity is easily resolved using major and subsequent ambiguity-resolving tones. However, an induced unknown phase error can interfere with resolving the ranging ambiguity. This paper suggests an effective and practical method to resolve the ranging ambiguity, without changing the originally planned ranging tone frequencies, when an unknown non-linear phase error exists. Specifically, the present study derives simple equations for finding the phase error from the physical relationship between the measured major and minor tones. Furthermore, a technique to select the optimal ambiguity integer and correct the phase error is provided. A numerical analysis is performed using real measurements from a low earth orbit (LEO) satellite to show the method's suitability and effectiveness. The results show that a non-ambiguous range is acquired after compensating for the unknown phase error.
When an access point transmits multi-view video over a wireless network with multiple subcarriers, bit errors occur in the low-quality subcarriers. These errors cause a significant degradation of video quality. The present paper proposes Significance-based Multi-view Video Streaming with Subcarrier Allocation (SMVS/SA) to maintain high video quality. SMVS/SA transmits each significant video frame over a high-quality subcarrier to minimize the effect of the errors. SMVS/SA makes two contributions. The first is a subcarrier-gain-based multi-view rate-distortion model that predicts each frame's significance based on the quality of the subcarriers. The second is a set of heuristic algorithms that decide a sub-optimal allocation between video frames and subcarriers. The heuristic algorithms exploit a feature of multi-view video coding, namely that a video frame is encoded using the frame of the previous time instant or of an adjacent camera, and decide the sub-optimal allocation with low computation. To evaluate the performance of SMVS/SA in a real wireless network, we measure the quality of subcarriers using a software radio. Evaluations using MERL's benchmark test sequences and the measured subcarrier quality reveal that SMVS/SA achieves low traffic and communication delay with only a slight degradation of video quality. For example, SMVS/SA improves video quality by up to 2.7 dB compared to a multi-view video transmission scheme without subcarrier allocation.