Sound source localization (SSL) with binaural input in practical environments is a challenging task due to the effects of noise and reverberation. In the field of psychoacoustics, one of the theories that explains the mechanism of human perception in such environments is the well-known equalization-cancellation (EC) model. Motivated by the EC theory, this paper investigates a binaural SSL method that integrates EC procedures into a beamforming technique. The principal idea is that the EC procedures are first used to eliminate the sound signal component at each candidate direction; the direction of the sound source is then determined as the direction at which the residual energy is minimal. The EC procedures applied in the proposed method differ from those in traditional EC models in that the interference signals in rooms are accounted for in the equalization and cancellation operations based on limited prior information. Experimental results demonstrate that the proposed method outperforms traditional SSL algorithms when noise and reverberation are present simultaneously.
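As a rough illustration of the equalization-cancellation idea (not the paper's room-aware procedure; the signal, delays, and function names below are our own toy assumptions), the sketch scans candidate interaural delays, aligns the two channels at each delay, cancels by subtraction, and picks the delay with minimal residual energy:

```python
import math

def ec_localize(left, right, candidate_delays):
    """EC-style localization sketch: for each candidate interaural
    delay, equalize (align the right channel by the delay), cancel
    (subtract), and record the residual energy.  The estimate is the
    delay with minimal residual."""
    residuals = {}
    for d in candidate_delays:
        energy = 0.0
        for n in range(max(0, -d), min(len(left), len(right) - d)):
            diff = left[n] - right[n + d]   # cancellation step
            energy += diff * diff
        residuals[d] = energy
    return min(residuals, key=residuals.get)

# Toy binaural signal: a sinusoid reaching the right ear 3 samples late.
N, true_delay = 200, 3
src = [math.sin(0.2 * n) for n in range(N + true_delay)]
left = [src[n + true_delay] for n in range(N)]
right = [src[n] for n in range(N)]
print(ec_localize(left, right, range(-5, 6)))  # -> 3
```

At the correct delay the source component cancels exactly, so the residual energy drops to zero; noise and reverberation would instead leave a nonzero floor, which is where the paper's modified E and C operations come in.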
In this paper, we propose a modified-error adaptive feedback active noise control (ANC) system using a linear prediction filter. The proposed ANC system is advantageous in terms of the rate of convergence, while maintaining stability, because it can reduce narrowband noise while suppressing disturbance that includes wideband components. The estimation accuracy of the noise control filter in the conventional system is degraded because the disturbance corrupts the input signal to the noise control filter. A solution to this problem is to utilize a linear prediction filter: because it can separate narrowband and wideband noise, it is applied to the modified-error feedback ANC system to suppress the wideband disturbance. Suppressing wideband noise is important for the head-mounted ANC system we have already proposed for reducing the noise from a magnetic resonance imaging (MRI) device, because the error microphones are located near the user's ears and the user's voice consequently corrupts the input signal to the noise control filter. Simulation and experimental results obtained using a digital signal processor (DSP) demonstrate that the proposed feedback ANC system is superior to a conventional feedback ANC system in terms of the estimation accuracy and the rate of convergence of the noise control filter.
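To illustrate why linear prediction separates narrowband from wideband components, the sketch below fits a second-order predictor by least squares; for a pure sinusoid the order-2 predictor is (near-)exact, so the prediction recovers the narrowband part and the residual carries whatever wideband content is present. The order-2 choice and the function names are illustrative assumptions, not the paper's design:

```python
import math

def lp_coeffs2(x):
    """Fit a second-order linear predictor x[n] ~ a1*x[n-1] + a2*x[n-2]
    by least squares, solving the 2x2 normal equations via Cramer's rule."""
    s11 = s12 = s22 = b1 = b2 = 0.0
    for n in range(2, len(x)):
        s11 += x[n-1] * x[n-1]; s12 += x[n-1] * x[n-2]; s22 += x[n-2] * x[n-2]
        b1 += x[n] * x[n-1]; b2 += x[n] * x[n-2]
    det = s11 * s22 - s12 * s12
    return (b1 * s22 - b2 * s12) / det, (s11 * b2 - s12 * b1) / det

def lp_split(x, a1, a2):
    """Predicted (narrowband) part and residual (wideband) part."""
    pred = [a1 * x[n-1] + a2 * x[n-2] for n in range(2, len(x))]
    resid = [x[n] - pred[n-2] for n in range(2, len(x))]
    return pred, resid

w = 0.3
x = [math.cos(w * n) for n in range(500)]      # purely narrowband input
a1, a2 = lp_coeffs2(x)
pred, resid = lp_split(x, a1, a2)
print(round(a1, 4), round(a2, 4))  # close to 2*cos(0.3) and -1
```

A sinusoid satisfies x[n] = 2cos(w)·x[n-1] − x[n-2] exactly, so the fitted coefficients land on that recursion and the residual is essentially zero; with added wideband disturbance the residual would carry it instead.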
This paper proposes a noise reduction method for impact noise with damped oscillation, such as that caused by clinking a glass or hitting a bottle. The proposed method is based on the zero-phase (ZP) signal, defined as the IDFT of the spectral amplitude. When the target noise can be modeled as the sum of an impact part and a damped oscillation part, the proposed method can reduce each part individually. First, the method estimates the damped oscillation spectra and subtracts them from the observed spectra. Then, the impact part is reduced by replacing several samples of the ZP observed signal. Simulation results show that the proposed method improved the SNR of real impact noise by 10 dB.
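A minimal sketch of the ZP signal itself (a naive O(N²) DFT for clarity; the test signal and names are ours):

```python
import cmath, math

def zero_phase(x):
    """Zero-phase (ZP) signal: the inverse DFT of the spectral
    amplitude |X[k]|.  Since |X[k]| is real and even for a real
    signal, the ZP signal is real and symmetric about sample 0,
    the property the sample-replacement step relies on."""
    N = len(x)
    X = [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
         for k in range(N)]
    amp = [abs(Xk) for Xk in X]
    return [sum(amp[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N
            for n in range(N)]

# Toy damped oscillation standing in for the impact noise.
x = [math.exp(-0.1 * n) * math.sin(0.5 * n) for n in range(64)]
zp = zero_phase(x)
```

Because all phase information is discarded, the ZP signal concentrates its largest value at sample 0 and is symmetric, so a short impact concentrates into a few samples around the origin.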
This paper shows a method to represent interval functions using head-tail expressions. Head-tail expressions represent greater-than GT(X:A) functions, less-than LT(X:B) functions, and interval functions IN0(X:A,B) more efficiently than sum-of-products expressions. Let n be the number of bits needed to represent the largest value in the interval (A,B). This paper proves that a head-tail expression (HT) represents an interval function with at most n words in a ternary content-addressable memory (TCAM) realization. It also shows the average numbers of factors needed to represent interval functions by HTs for up to n = 16, obtained by computer simulation. It further conjectures that, for sufficiently large n, the average number of factors needed to represent n-variable interval functions by HTs is at most (2/3)n − 5/9. Experimental results also show that, for n ≥ 10, HTs require at least 20% fewer factors than MSOPs to represent interval functions, on average.
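The at-most-n-words property can be illustrated with the standard prefix expansion of a greater-than condition, which emits one ternary word per zero bit of A; this is a sketch of the underlying idea, not the exact head-tail construction of the paper:

```python
def gt_words(A, n):
    """Ternary words (MSB first, '*' = don't care) whose union matches
    exactly the x in [0, 2^n) with x > A.  One word per zero bit of A,
    hence at most n words -- the TCAM-friendly property."""
    words = []
    for i in range(n):                 # i = bit position, 0 is the LSB
        if not (A >> i) & 1:           # A has a 0 at position i
            high = ''.join('1' if (A >> j) & 1 else '0'
                           for j in range(n - 1, i, -1))
            words.append(high + '1' + '*' * i)
    return words

def matches(x, word, n):
    bits = format(x, '0%db' % n)
    return all(w == '*' or w == b for w, b in zip(word, bits))

print(gt_words(0b0101, 4))  # -> ['011*', '1***']
```

Each word fixes the bits of A above one zero position, forces that position to 1, and leaves the lower bits free, so any x matching it exceeds A; together the words cover every x > A exactly once.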
Boneh et al. proposed the new idea of pairing-based cryptography using a composite-order group instead of a prime-order group. Recently, many cryptographic schemes using pairings over composite-order groups have been proposed. Miller's algorithm is used to compute pairings, and the time needed to compute a pairing depends on the cost of the Miller loop. As a way of speeding up pairings of prime order, the number of iterations of the Miller loop can be reduced by choosing a prime order of low Hamming weight. However, it is difficult to choose a particular composite order that speeds up composite-order pairings. Kobayashi et al. proposed an efficient algorithm for computing Miller's algorithm using a window method, called Window Miller's algorithm. Scalar multiplication of points on elliptic curves can be computed using a window hybrid binary-ternary form (w-HBTF). In this paper, we propose a Miller's algorithm that uses the w-HBTF to compute the Tate pairing efficiently. The algorithm needs precomputation of both points on the elliptic curve and rational functions. The proposed algorithm was implemented in Java on a PC and compared with Window Miller's algorithm in terms of the time and memory needed to build their precomputed tables. We used the supersingular elliptic curve y² = x³ + x with embedding degree 2 and a 2048-bit composite order, and we denote the window width by w. The proposed algorithm with w = 6 = 2·3 was about 12.9% faster than Window Miller's algorithm with w = 2, although the memory sizes of the two algorithms are the same. Moreover, the proposed algorithm with w = 162 = 2·3⁴ was about 12.2% faster than Window Miller's algorithm with w = 7.
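The flavour of a hybrid binary-ternary recoding can be sketched with a simple greedy variant (not necessarily the paper's w-HBTF, which additionally uses a window and precomputed points): each emitted step corresponds to a point doubling, tripling, or addition in the scalar multiplication:

```python
def hbtf(k):
    """Greedy hybrid binary-ternary recoding of a positive integer k:
    emit the operations that rebuild k, so [k]P can be computed with
    doublings ('D'), triplings ('T'), and additions ('A')."""
    ops = []
    while k > 0:
        if k % 2 == 0:
            ops.append('D'); k //= 2
        elif k % 3 == 0:
            ops.append('T'); k //= 3
        else:
            ops.append('A'); k -= 1
    return ops[::-1]            # most significant step first

def rebuild(ops):
    """Inverse of hbtf: replay the operations starting from 0."""
    k = 0
    for op in ops:
        k = 2 * k if op == 'D' else 3 * k if op == 'T' else k + 1
    return k

print(hbtf(10))  # -> ['A', 'D', 'D', 'A', 'D']
```

Mixing base-2 and base-3 steps shortens the operation chain for many scalars compared with plain binary, which is the motivation for using such forms inside the Miller loop.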
In recent years, ray space (or light field, in other literature) photography has become popular in computer vision and image processing, and capturing a ray space has become more significant for these practical applications. To handle the huge amount of data in the acquisition stage, the original data are first compressively sampled and later fully reconstructed. In this paper, to achieve better reconstruction quality and faster reconstruction, we propose a statistically weighted model for the reconstruction of a compressively sampled ray space. The model explores the structure of ray space data in an orthogonal basis and integrates this structure into the reconstruction. In experiments, the proposed model achieves much better reconstruction quality for both 2D image patches and 3D image cubes. Even at a relatively low sensing ratio of about 10%, the proposed method can still recover most of the low-frequency components, which are the most significant for representing ray space data. Moreover, the proposed method matches the state-of-the-art dictionary-learning-based method in reconstruction quality while reconstructing much faster. Therefore, the proposed method achieves a better trade-off between reconstruction quality and reconstruction time and is more suitable for practical applications.
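The weighting idea can be sketched with a weighted iterative soft-thresholding (ISTA) loop in which coefficients believed to be significant are shrunk less; here the weights, sizes, and the "low-frequency = low index" assumption are all hand-picked for illustration, whereas the paper derives its weights statistically from the data:

```python
import random

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def matvec_T(A, v):
    return [sum(A[i][j] * v[i] for i in range(len(A))) for j in range(len(A[0]))]

def soft(u, t):
    return (u - t) if u > t else (u + t) if u < -t else 0.0

def objective(A, y, x, w, lam):
    r = [yi - ri for yi, ri in zip(y, matvec(A, x))]
    return 0.5 * sum(ri * ri for ri in r) + lam * sum(wi * abs(xi) for wi, xi in zip(w, x))

def weighted_ista(A, y, w, lam=0.01, iters=200):
    """Minimize 0.5*||Ax - y||^2 + lam * sum_i w[i]*|x[i]| by ISTA.
    Small weights shrink a coefficient less, encoding the prior that
    it is likely active."""
    n = len(A[0])
    t = 1.0 / sum(a * a for row in A for a in row)  # step <= 1/||A||_2^2
    x = [0.0] * n
    for _ in range(iters):
        g = matvec_T(A, [yi - ri for yi, ri in zip(y, matvec(A, x))])
        x = [soft(xi + t * gi, t * lam * wi) for xi, gi, wi in zip(x, g, w)]
    return x

random.seed(0)
m, n = 8, 16
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
x0 = [0.0] * n; x0[0], x0[1] = 1.0, -0.8          # sparse, low-frequency
y = matvec(A, x0)
w = [1.0] * 4 + [10.0] * 12                       # favour low indices
xr = weighted_ista(A, y, w)
```

With the step size bounded by the inverse of the squared Frobenius norm, each ISTA iteration is guaranteed not to increase the weighted objective.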
We present a low-complexity complementary pair affine projection (CP-AP) adaptive filter that updates its coefficients intermittently. To achieve both a fast convergence rate and a small residual error, we combine fast and slow AP filters while significantly reducing the computational complexity. An evolutionary method that automatically determines the update intervals significantly decreases the update frequencies of the two constituent filters. Experimental results show that the proposed CP-AP adaptive filter has an advantage over conventional adaptive filters with a parallel structure: it achieves similar convergence performance with a substantial reduction in the total number of updates.
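The fast/slow pairing can be sketched with a convex combination of two NLMS filters (NLMS stands in for affine projection here, and the intermittent-update and evolutionary mechanisms are omitted); the mixing parameter adapts so that whichever filter currently performs better dominates the output:

```python
import math, random

def nlms_update(w, x, e, mu, eps=1e-6):
    """Normalized-LMS coefficient update (a stand-in for the AP update)."""
    p = sum(xi * xi for xi in x) + eps
    return [wi + mu * e * xi / p for wi, xi in zip(w, x)]

random.seed(1)
h = [0.5, -0.3, 0.2, 0.1]            # unknown system to identify
M = len(h)
w_fast, w_slow = [0.0] * M, [0.0] * M
a = 0.0                               # mixing parameter; lam = sigmoid(a)
buf = [0.0] * M
errs = []
for n in range(2000):
    buf = [random.gauss(0, 1)] + buf[:-1]
    d = sum(hi * xi for hi, xi in zip(h, buf))
    y1 = sum(wi * xi for wi, xi in zip(w_fast, buf))
    y2 = sum(wi * xi for wi, xi in zip(w_slow, buf))
    lam = 1.0 / (1.0 + math.exp(-a))
    e, e1, e2 = d - (lam * y1 + (1 - lam) * y2), d - y1, d - y2
    w_fast = nlms_update(w_fast, buf, e1, 1.0)    # fast: large step
    w_slow = nlms_update(w_slow, buf, e2, 0.05)   # slow: small step
    # gradient update of the mixing parameter (clipped for stability)
    a = max(-4.0, min(4.0, a + 0.5 * e * (y1 - y2) * lam * (1 - lam)))
    errs.append(abs(e))
```

The fast filter pulls the combined error down quickly while the slow filter eventually provides the smaller steady-state error, which is the complementary-pair rationale.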
This note presents a new approach to the robustness of Hurwitz polynomials under coefficient perturbation. The s-domain Hurwitz polynomial is transformed to a z-domain polynomial by the bilinear transformation. Then an approach based on the Rouché theorem, introduced in the literature, is applied to compute a crude bound on the allowable coefficient variation such that the perturbed polynomial remains Hurwitz stable. Three methods to obtain improved bounds are also suggested. The results of this note are computationally more efficient than the existing direct s-domain approaches, especially for polynomials of higher degree. Furthermore, examples indicate that the exact bound on the coefficient variation can be obtained in some cases.
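The z-domain Rouché step can be sketched as follows: if each coefficient of a degree-d polynomial p(z) with all roots inside the unit circle changes by less than min_{|z|=1}|p(z)|/(d+1), then |Δp(z)| < |p(z)| on the unit circle, so by Rouché's theorem the number of roots inside the circle is unchanged. A crude numerical version, on a toy polynomial of our own choosing:

```python
import cmath, math

def rouche_bound(p, n_samples=2048):
    """Crude stability-preserving perturbation bound for a z-domain
    polynomial p (coefficient list, highest degree first) with all
    roots inside the unit circle: each coefficient may vary by less
    than min_{|z|=1}|p(z)| / (d+1), since on |z|=1 the perturbation
    polynomial satisfies |dp(z)| <= (d+1)*eps."""
    d = len(p) - 1
    m = min(abs(sum(c * z ** (d - i) for i, c in enumerate(p)))
            for z in (cmath.exp(2j * math.pi * k / n_samples)
                      for k in range(n_samples)))
    return m / (d + 1)

# p(z) = z^2 - 0.25 has roots +/-0.5; min |p| on the unit circle is 0.75.
print(rouche_bound([1, 0, -0.25]))  # -> 0.25
```

The minimum of |p(z)| on the circle is found by sampling, which is the "crude" part; the note's improved bounds tighten exactly this step.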
Runtime analysis enhances the safety of critical systems by monitoring changes in their external environments. In this paper, a modified fault tree analysis (FTA) approach, making full use of existing safety analysis results, is put forward to achieve runtime safety analysis. The procedures of the approach are given in detail. This approach could be widely used in the safety engineering of critical systems.
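A minimal sketch of evaluating a fault tree against monitored runtime states (the gates and event names are hypothetical; the paper's modified FTA procedure is more involved):

```python
def evaluate(node, state):
    """Evaluate a fault tree node against monitored runtime states.
    node = ('event', name) | ('and', [children]) | ('or', [children]);
    state maps event names to their current truth value."""
    kind = node[0]
    if kind == 'event':
        return state[node[1]]
    vals = [evaluate(c, state) for c in node[1]]
    return all(vals) if kind == 'and' else any(vals)

# Top event occurs if (sensor AND backup fail) OR power is lost.
tree = ('or', [('and', [('event', 'sensor_fail'), ('event', 'backup_fail')]),
               ('event', 'power_loss')])
print(evaluate(tree, {'sensor_fail': True, 'backup_fail': False,
                      'power_loss': False}))  # -> False
```

Re-evaluating the tree whenever a monitored basic event changes turns a static safety analysis artifact into a runtime check.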
In this letter, we combine minimum-shift keying (MSK) with physical-layer network coding (PNC) to form a new scheme, MSK-PNC, for two-way relay channels (TWRCs). We investigate signal detection for the MSK-PNC scheme and propose two detection methods: orthogonal demodulation and mapping (ODM) and two-state differential detection (TSDD). The error performance of the proposed MSK-PNC scheme is evaluated through simulations.
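The PNC mapping at the relay can be illustrated with superimposed BPSK rather than MSK (a deliberate simplification; the letter's ODM and TSDD detectors operate on the MSK waveform): the relay maps the superimposed received signal directly to the XOR of the two source bits without decoding them individually:

```python
def pnc_map(y):
    """Relay-side PNC demapping for superimposed BPSK (+1/-1 symbols),
    y = x1 + x2 (+ noise): |y| near 2 means the bits agree (XOR = 0),
    y near 0 means they differ (XOR = 1)."""
    return 0 if abs(y) > 1.0 else 1

# Noiseless check over all bit pairs: the relay recovers b1 XOR b2.
print([pnc_map((1 - 2 * b1) + (1 - 2 * b2))
       for b1 in (0, 1) for b2 in (0, 1)])  # -> [0, 1, 1, 0]
```

Broadcasting the XOR bit lets each end node recover the other's bit from its own, which is what makes the two-way relay exchange take two slots instead of four.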
Aircraft Landing Scheduling (ALS) determines the landing time for each aircraft. The objective of ALS is to minimise the deviation of each aircraft's landing time from its target landing time. In this paper, we propose a dynamic hyper-heuristic algorithm for the ALS problem. In our approach, the Scatter Search algorithm serves as the high-level heuristic to build a chain of intensification and diversification priority rules; these priority rules, the low-level heuristics in the hyper-heuristic framework, are applied to generate the landing sequence. The landing time for each aircraft can then be calculated efficiently from the landing sequence. Simulation studies demonstrate that the proposed algorithm obtains high-quality solutions for ALS.
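One low-level priority rule can be sketched as follows (earliest-target-first with a single common separation time, both simplifying assumptions of ours): sequence aircraft by target time, assign each the earliest feasible landing time, and score the sequence by total deviation from targets, which is the ALS objective:

```python
def schedule(targets, sep):
    """Earliest-target-first priority rule: sequence by target landing
    time, then set each landing time to max(target, previous landing +
    separation).  Returns the landing times and the total deviation
    from the targets."""
    t_prev = -float('inf')
    times, cost = [], 0
    for tgt in sorted(targets):
        t = max(tgt, t_prev + sep)    # never land before target or too close
        times.append(t)
        cost += abs(t - tgt)
        t_prev = t
    return times, cost

print(schedule([10, 12, 30], 5))  # -> ([10, 15, 30], 3)
```

In the hyper-heuristic framework, Scatter Search would chain several such rules and keep the sequences whose computed costs are lowest.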