The performance of phase-aware speech enhancement has improved dramatically in recent years. Combined with complex convolutions, the deep complex U-Net and the deep complex convolution recurrent network (DCCRN) have achieved superior performance in monaural phase-aware speech enhancement. However, these methods optimize the models with a loss defined only in the time domain and ignore the global correlations along the frequency axis that capture the harmonic information between frequency bands. Moreover, algorithms based on self-attention exhibit high computational complexity. To strike a balance between performance and computational cost, we propose a new monaural phase-aware method in the time-frequency domain built on the deep complex U-Net structure. Specifically, the proposed method incorporates a dual-path recurrent neural network (DPRNN) block in the bottleneck to model both frequency-domain and time-domain correlations. Additionally, attention modules are implemented between the complex encoder and decoder layers rather than directly concatenating their outputs, which enhances the model's representation more effectively. Finally, a post-processing module is introduced to mitigate the over-suppression of speech and residual noise. We conduct ablation studies to validate the effectiveness of the dual-path method and the post-processing module. Compared with several recent speech enhancement models, the proposed algorithm also demonstrates remarkable improvements in objective metrics.
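The dual-path idea in the bottleneck can be illustrated with a toy sketch (not the paper's implementation): an intra pass runs a recurrence along the frequency axis of each time frame, and an inter pass runs along the time axis of each frequency bin. A simple exponential moving average stands in for each RNN here.

```python
def toy_rnn(sequence, alpha=0.5):
    """Stand-in for an RNN: exponential moving average over a sequence."""
    state, out = 0.0, []
    for x in sequence:
        state = alpha * x + (1 - alpha) * state
        out.append(state)
    return out

def dual_path_block(tf_grid):
    """tf_grid[t][f]: time-frequency features; returns a grid of equal shape."""
    T, F = len(tf_grid), len(tf_grid[0])
    # Intra pass: model correlations along the frequency axis (per frame).
    intra = [toy_rnn(frame) for frame in tf_grid]
    # Inter pass: model correlations along the time axis (per frequency bin).
    inter_cols = [toy_rnn([intra[t][f] for t in range(T)]) for f in range(F)]
    return [[inter_cols[f][t] for f in range(F)] for t in range(T)]

grid = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]  # 2 frames x 3 frequency bins
out = dual_path_block(grid)
print(len(out), len(out[0]))  # shape is preserved: 2 3
```

A real DPRNN block would replace `toy_rnn` with learned (often bidirectional) recurrent layers plus normalization and residual connections; only the two-axis iteration order is illustrated here.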
In this paper, a hybrid active and passive (HAP) multiple-input multiple-output (MIMO) radar network is considered, where target returns from both active radar transmitters and illuminators of opportunity (IOs) are employed to complete target detection. With consideration for the active radar power limitation and the constraint on the total number of available IOs, the joint discrete power allocation and antenna selection for target detection in the HAP MIMO radar is studied. A game-theoretic framework is proposed to solve the problem, where the target probability of detection (PD) of the HAP MIMO radar is used to build a common utility. The formulated discrete game is proven to be a potential game, which possesses at least one pure-strategy Nash equilibrium (NE) and an optimal strategy profile that maximizes the PD of the HAP radar. The properties of the formulated game, including the feasibility, existence, and optimality of the NE, are also analyzed, and the game's pure-strategy NE is shown to be an optimal scheme under certain conditions. An iterative algorithm is then designed to reach the pure-strategy NE, and its convergence and complexity are discussed. It is demonstrated that the designed algorithm achieves near-optimal target detection performance while maintaining low complexity, and under certain conditions it attains optimal performance.
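The convergence argument behind a potential game can be sketched generically (the node count, power levels, budget, and utility below are toy assumptions, not the paper's model): because every unilateral improvement raises a common potential, sequential best responses must terminate at a pure-strategy NE.

```python
import math

LEVELS = [0, 1, 2, 3]   # discrete power levels available to each node (toy)
N_NODES = 4
BUDGET = 6              # total power constraint (toy)

def utility(profile):
    """Toy common utility with diminishing returns; -inf if infeasible."""
    total = sum(profile)
    if total > BUDGET:
        return float("-inf")
    return math.log1p(total)

def best_response_dynamics(profile):
    """Let each player improve in turn until no one can: a pure-strategy NE."""
    profile = list(profile)
    improved = True
    while improved:
        improved = False
        for i in range(N_NODES):
            current = utility(profile)
            for level in LEVELS:
                trial = profile[:i] + [level] + profile[i + 1:]
                if utility(trial) > current + 1e-12:
                    profile, current, improved = trial, utility(trial), True
    return profile

ne = best_response_dynamics([0] * N_NODES)
print(ne, sum(ne))  # a feasible profile no node can unilaterally improve
```

In a genuine potential game, the common utility itself serves as the potential, so each improvement step strictly increases it and the finite strategy space guarantees termination; the paper's PD-based utility plays that role.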
In conventional fault diagnosis, supervised learning-based approaches may not be applicable to practical systems because of their extensive requirements for labeled data. Moreover, conventional approaches have not adequately addressed the challenges posed by sparsely labeled and imbalanced datasets. To address these limitations, we propose a semi-supervised fault diagnostic method based on graph convolutional networks with generative adversarial networks. Distinct from conventional methods, the proposed method trains a discriminator to extract features from both labeled and unlabeled data. The discriminator is then used to construct a similarity matrix that enhances the efficacy of graph-based methods. A graph-based classifier combined with the discriminator can efficiently perform fault diagnosis without requiring data augmentation. The fault diagnostic methods were evaluated in terms of classification accuracy to validate the superiority of the proposed method. The simulation results confirm that the proposed method improves classification accuracy by up to 66% compared with conventional methods.
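As a hedged illustration of the graph-construction step (not the paper's exact procedure), a similarity matrix for a graph-based classifier can be built from feature vectors such as those a discriminator extracts; cosine similarity, the threshold, and the toy vectors below are assumptions for illustration only.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def similarity_matrix(features, threshold=0.9):
    """Adjacency for a graph classifier: connect samples with similar features."""
    n = len(features)
    return [[1.0 if i != j and cosine(features[i], features[j]) >= threshold
             else 0.0 for j in range(n)] for i in range(n)]

# Two similar samples and one distinct sample (hypothetical feature vectors).
feats = [[1.0, 0.1], [0.9, 0.2], [0.1, 1.0]]
A = similarity_matrix(feats)
print(A)  # only the first two samples are connected
```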
This work introduces the dueling dice problem, a variant of the multi-armed dueling bandit problem. In this problem, a die is a set of m arms, and the goal is to find the best set of m arms from n arms (m ≤ n) through iterated duels between dice. In each round, the learner arbitrarily chooses two dice α ⊆ [n] and β ⊆ [n] and lets them duel: she rolls dice α and β, observes a pair of arms i ∈ α and j ∈ β, and receives a probabilistic result X_{i,j} ∈ {0,1}. This paper investigates the sample complexity of identifying the Condorcet winner die, and gives an upper bound O(n h^{-2} (log log h^{-1} + log(n m^2 γ^{-1})) m log m), where h is a gap parameter and γ is an error parameter. Our problem is closely related to the dueling teams problem of Cohen et al. (2021). We assume a total order of strength over arms, as in Cohen et al. (2021), which ensures the existence of the Condorcet winner die; however, unlike Cohen et al. (2021), we do not assume a total order of strength over dice.
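A single duel in this setting can be sketched as follows; the pairwise win probabilities P and the particular dice are hypothetical toy values, not taken from the paper.

```python
import random

random.seed(0)

# P[i][j] = probability that arm i beats arm j (toy total order 0 > 1 > 2 > 3).
P = [[0.5, 0.7, 0.8, 0.9],
     [0.3, 0.5, 0.7, 0.8],
     [0.2, 0.3, 0.5, 0.7],
     [0.1, 0.2, 0.3, 0.5]]

def duel(die_a, die_b):
    """Roll both dice, observe one arm from each, return X_{i,j} in {0, 1}."""
    i = random.choice(sorted(die_a))
    j = random.choice(sorted(die_b))
    return 1 if random.random() < P[i][j] else 0

def estimate_win_rate(die_a, die_b, rounds=20000):
    """Monte Carlo estimate of how often die_a beats die_b."""
    return sum(duel(die_a, die_b) for _ in range(rounds)) / rounds

# Die {0, 1} holds the two strongest arms, so it should beat die {2, 3}
# (expected win rate 0.8 under uniform rolls with the toy P above).
rate = estimate_win_rate({0, 1}, {2, 3})
print(round(rate, 2))
```

An identification algorithm would adaptively choose which dice to duel and how often, trading off the gap parameter h against the error parameter γ; the sketch only shows the observation model.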
Direction-of-arrival (DOA) tracking of multiple moving targets in the far field with distributed sensor arrays is an important research direction. The results of multi-node DOA estimation are usually fused directly, but when the signal-to-noise ratios (SNRs) differ across nodes, direct fusion degrades estimation performance. This paper proposes a DOA tracking algorithm based on information fusion between distributed array nodes. Firstly, unscented information filtering results, including state vectors and information matrices, are produced at each node. Then, using the average consensus (AC) algorithm, the state vector and information matrix of each node are fused to produce a DOA fusion result that fully accounts for the accuracy of each node. Theoretical analysis and numerical simulation results show that the algorithm has low computational complexity and achieves good, robust DOA tracking performance at low SNRs.
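The fusion step can be illustrated with a generic average-consensus iteration in information form (a hedged sketch, not the paper's exact algorithm; the topology, step size, and toy scalar measurements are assumptions). Because information values are inverse variances, high-SNR nodes automatically carry more weight in the fused estimate.

```python
# Scalar information form for illustration: y_k = Y_k * theta_k, where Y_k is
# node k's information (inverse variance) and theta_k its local DOA estimate.
Y = [4.0, 1.0, 0.25]                                   # high, medium, low SNR
y = [Yk * th for Yk, th in zip(Y, [30.1, 29.0, 33.0])]  # information vectors

neighbors = {0: [1], 1: [0, 2], 2: [1]}  # line-graph communication topology
EPS = 0.3                                 # consensus step size

def consensus_step(values):
    """One average-consensus update: move toward the neighborhood average."""
    return [v + EPS * sum(values[j] - v for j in neighbors[k])
            for k, v in enumerate(values)]

for _ in range(200):  # iterate until all nodes agree on the network average
    Y = consensus_step(Y)
    y = consensus_step(y)

# Every node recovers the same information-weighted fused DOA estimate,
# which is pulled toward the accurate (high-information) node's 30.1 degrees.
fused = [yk / Yk for yk, Yk in zip(y, Y)]
print([round(f, 2) for f in fused])
```

In the matrix case the same iteration runs elementwise on the information matrices and vectors; the fused estimate is then the fused information matrix inverse applied to the fused information vector.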
We propose a new method to compute the joint reliability importance, a useful index for reliability design. The key idea is to apply a special matrix to the computation of the marginal reliability importance. The existing algorithm computes the joint reliability importance for every pair of components with complexity equal to the square of the number of components multiplied by the complexity of computing the system reliability. We show that this can be reduced to the number of components multiplied by the complexity of computing the system reliability, provided the system can be represented by a special type of directed graph or any combinatorial model and the sum-of-disjoint-products method is used to compute the system reliability.
EMV 3-D Secure is an authentication service used mainly to identify and verify cardholders in card-not-present (CNP) transactions over the Internet. EMV 3-D Secure services are provided by international credit card brands such as Visa, Mastercard, and American Express, and the protocol is specified by EMVCo. Existing works have evaluated the security of several versions of 3-D Secure, such as a formal verification using Casper/FDR2 for the old specification (3-D Secure 1.0) and a spoofing attack using reverse engineering on risk assessment indicators for the current specification, EMV 3-D Secure (3-D Secure 2.0). However, there has been no security verification of EMV 3-D Secure based on its protocol specification. Formal methods can verify security with high fidelity to the protocol specification and have been actively researched in recent years. In this paper, we verify the security of EMV 3-D Secure using ProVerif, an automated security verification tool for cryptographic protocols. One of the difficulties we faced was correctly extracting the detailed protocol structure from the entire specification, which is written in natural language and spans over 400 pages. Based on the extracted protocol structure, we formalize the Challenge Flow for authentication by secret information under three environments (App-based (default-sdk), App-based (split-sdk), and Browser-based) in the latest version, 2.3.1.1, which are specified for the purpose of identity verification in CNP transactions. We then verify the confidentiality of the secret information and its resistance to off-line dictionary attacks, as well as authenticity and resistance to replay attacks, against both man-in-the-middle attacks and colluding attacks with relay servers. As verification results, we show that the Challenge Flow satisfies all of the above security requirements.
Furthermore, we discuss the necessity of the unilaterally authenticated channel between the cardholder and the card issuer assumed in the EMV 3-D Secure specification, and show that the Challenge Flow still satisfies the security requirements if a public channel is used instead. This indicates that the protocol can be made more efficient than specified without reducing security.
If scattered X-rays carry information independent of that carried by primary X-rays, attenuation coefficients estimated using both primary and scattered X-rays are expected to be more accurate than those estimated using primary X-rays alone. However, because scattered X-rays cannot be easily introduced into conventional X-ray computed tomography (CT), the issue has received scant attention. This study demonstrates theoretically that measuring scattered X-rays improves the accuracy of CT reconstruction, even in a photoelectric absorption scenario. Here, the CT geometry was simplified to a system targeting a homogeneous thin cylinder while retaining the necessary configuration. Furthermore, we constructed a mathematical model termed the π-junction model. This model extends the T-junction model used in one of our previous studies by accounting for the photoelectric effect, which the T-junction model did not consider. The variance in estimating the attenuation coefficients of this model from measurements of both primary and scattered photons was evaluated as the Cramér-Rao lower bound. Both the theory and numerical experiments using Monte Carlo simulation showed that the accuracy of estimating the attenuation coefficient can be improved by measuring scattered X-rays together with primary X-rays, even in the presence of photoelectric absorption. This result provides a basis for the superiority of using scattered X-rays.
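The core claim rests on standard Cramér-Rao reasoning: for independent primary and scattered measurements, Fisher information is additive, so the lower bound on the variance of the attenuation-coefficient estimator can only tighten. In generic notation (not the paper's):

```latex
% Fisher information adds over independent measurements:
%   I_{\mathrm{total}}(\mu) = I_{\mathrm{p}}(\mu) + I_{\mathrm{s}}(\mu),
% where I_{\mathrm{p}} and I_{\mathrm{s}} are the information contributed by
% primary and scattered photons about the attenuation coefficient \mu.
\operatorname{Var}(\hat{\mu}) \;\ge\; \bigl(I_{\mathrm{p}}(\mu) + I_{\mathrm{s}}(\mu)\bigr)^{-1},
\qquad
\bigl(I_{\mathrm{p}}(\mu) + I_{\mathrm{s}}(\mu)\bigr)^{-1} \;\le\; I_{\mathrm{p}}(\mu)^{-1}.
```

The nontrivial part of the paper's result is showing that I_s(µ) remains strictly positive, i.e. that scattered photons genuinely carry independent information about µ, even when photoelectric absorption is present.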
To address the low detection accuracy caused by occlusion and scale variation in complex traffic scenarios, as well as the high computational complexity and large parameter counts of traditional methods, this paper proposes a Lightweight Cross-Scale Feature Fusion Algorithm. Firstly, we design the Lightweight Cross-Scale Feature Fusion Module (LCFM), which incorporates an improved internal fusion block to facilitate interactive feature fusion. This design enhances the model's adaptability to occlusion and scale changes while reducing the number of input feature channels to make the model more lightweight. Furthermore, by integrating Squeeze-and-Excitation (SE) attention with the multi-branch convolution operations of the Inception structure, the model can more accurately capture multi-scale object features. Additionally, Linear Deformable Convolution (LDConv) is employed to adaptively handle shape changes through offset learning, thereby reducing computational redundancy and improving the model's overall adaptability.
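The SE attention mentioned above follows the standard squeeze-excite-rescale pattern, sketched here in plain Python with fixed toy weights (a real model learns these weights and runs in a deep learning framework):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_attention(feature_maps, w1, w2):
    """feature_maps: list of C channels, each a flat list of spatial values."""
    # Squeeze: global average pooling gives one descriptor per channel.
    z = [sum(ch) / len(ch) for ch in feature_maps]
    # Excitation: bottleneck FC -> ReLU -> FC -> sigmoid yields channel weights.
    hidden = [max(0.0, sum(w * zj for w, zj in zip(row, z))) for row in w1]
    scale = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w2]
    # Rescale: reweight each channel by its attention score.
    return [[s * v for v in ch] for s, ch in zip(scale, feature_maps)]

maps = [[1.0, 3.0], [2.0, 2.0]]  # C=2 channels, 2 spatial positions (toy)
w1 = [[0.5, 0.5]]                 # C -> 1 bottleneck weights (toy, fixed)
w2 = [[1.0], [-1.0]]              # 1 -> C expansion weights (toy, fixed)
out = se_attention(maps, w1, w2)
print(len(out) == len(maps))      # channel count is preserved
```

The channel weights let the network emphasize informative channels and suppress uninformative ones at negligible cost, which is why SE pairs well with the multi-branch Inception-style convolutions described above.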