We propose a deep-learning-based denoising model optimized for Japanese seismic waveform data. Unlike the conventional DeepDenoiser [1], our model uses MeSO-net observations, which contain strong anthropogenic and urban noise, and adopts a U-Net architecture with a Convolutional Block Attention Module (CBAM) [4] for enhanced feature extraction. The input consists of a two-channel spectrogram representing the real and imaginary parts of the complex STFT. The target is an amplitude ratio mask derived from the magnitude spectra of noisy and clean signals. The loss function combines mean squared error (MSE) with signal-to-noise ratio (SNR) and cross-correlation (CC) terms to preserve waveform similarity. The model converged after 27 epochs and achieved an evaluation score of 299.71, far exceeding DeepDenoiser (15.19). Average SNRs reached ~170, and CC values exceeded 0.9 across all components. These results demonstrate that incorporating SNR and CC terms improves denoising performance while maintaining signal fidelity for Japanese seismic data.
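The amplitude ratio mask and the composite loss described above can be sketched as follows. The exact mask clipping, loss weights, and the way the SNR and CC terms are combined are not given in the abstract; the weights `w_snr` and `w_cc` and the sign conventions below are illustrative assumptions.

```python
import numpy as np

def ratio_mask(clean_mag, noisy_mag, eps=1e-8):
    # Amplitude ratio mask: |clean| / |noisy|, clipped to [0, 1]
    # (clipping to the unit interval is an assumption, not stated in the abstract)
    return np.clip(clean_mag / (noisy_mag + eps), 0.0, 1.0)

def combined_loss(pred_mask, true_mask, denoised, clean, w_snr=0.1, w_cc=0.1):
    # MSE between predicted and target masks
    mse = np.mean((pred_mask - true_mask) ** 2)
    # SNR of the denoised waveform (maximized, hence subtracted)
    noise = denoised - clean
    snr = 10.0 * np.log10(np.sum(clean ** 2) / (np.sum(noise ** 2) + 1e-12))
    # Normalized cross-correlation with the clean waveform (maximized via 1 - CC)
    cc = np.sum(denoised * clean) / (
        np.linalg.norm(denoised) * np.linalg.norm(clean) + 1e-12)
    return mse - w_snr * snr + w_cc * (1.0 - cc)
```

A better denoised waveform (higher SNR, higher CC) lowers the loss even when the mask MSE is unchanged, which is how the extra terms preserve waveform similarity.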
We propose a novel loss function, Spike-Aware Weighted MSE (SAW-MSE), which emphasizes prediction accuracy during geomagnetic storm periods by adaptively weighting errors. While traditional LSTM models trained with mean squared error (MSE) struggle to capture extreme Dst variations, SAW-MSE incorporates a dynamic weighting mechanism governed by two parameters, α and β: α controls the degree of emphasis on severe geomagnetic storms, and β determines how steeply the penalty increases as Dst values become more negative. Experimental results demonstrate that the proposed method improves accuracy during intense storm events, reducing RMSE from 57.88 (LSTM with MSE) to 45.94 (LSTM with SAW-MSE). This result suggests that a domain-specific loss function such as SAW-MSE can effectively enhance robustness in space weather forecasting.
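The abstract does not give the exact functional form of the weight, so the sketch below uses one plausible instantiation: a sigmoid in the (negative) Dst value, where α scales the extra emphasis on storms and β sets how sharply that emphasis turns on. Both the form and the default parameter values are assumptions for illustration.

```python
import numpy as np

def saw_mse(y_true, y_pred, alpha=2.0, beta=0.05):
    """Sketch of a Spike-Aware Weighted MSE.

    Assumed weighting: w = 1 + alpha * sigmoid(-beta * Dst), so that more
    negative Dst values (stronger storms) receive larger per-sample weights.
    alpha = 0 recovers plain MSE.
    """
    w = 1.0 + alpha / (1.0 + np.exp(beta * y_true))  # sigmoid(-beta * Dst)
    return np.mean(w * (y_true - y_pred) ** 2)
```

With this form, an error of the same magnitude costs more during a Dst = -200 nT storm than during quiet time, which is the behavior the abstract attributes to SAW-MSE.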
We propose a deep learning model that predicts the geomagnetic storm index (Dst) 24 hours in advance from solar wind data. The model employs a two-step prediction process: a preliminary prediction is made by a Transformer-based model, and a second-stage prediction is then produced by one of several models selected according to the preliminary value. The combination of models was tuned to minimize RMSE for the contest, yielding a best RMSE of 28.381 on the test data.
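The two-step routing can be sketched as below. The thresholds, the number of specialist models, and the callables are hypothetical stand-ins; the abstract only states that the second-stage model is chosen from the preliminary prediction value.

```python
def route_prediction(features, preliminary_model, specialists, default_model):
    """Two-step Dst prediction sketch.

    preliminary_model: callable giving a first Dst estimate (Transformer-based
                       in the abstract; any callable here).
    specialists:       list of (threshold, model) pairs; a specialist handles
                       preliminary values at or below its threshold.
    default_model:     fallback model for quiet-time (less negative) values.
    """
    # Step 1: preliminary prediction
    dst_prelim = preliminary_model(features)
    # Step 2: delegate to the specialist matching the preliminary value
    # (more negative Dst = stronger storm, so check thresholds ascending)
    for threshold, model in sorted(specialists, key=lambda pair: pair[0]):
        if dst_prelim <= threshold:
            return model(features)
    return default_model(features)
```

This mirrors the described pipeline: a storm-specialist model is used only when the preliminary Transformer output indicates storm-level Dst, and a default model handles the rest.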