2025 Volume 29 Issue 4 Pages 868-879
Deep learning has achieved significant advances in natural language processing. However, applying these methods to languages with complex morphological and syntactic structures, such as Russian, remains challenging. To address these challenges, this paper presents an optimized sentiment analysis model, GNN–BERT–AE, designed specifically for the Russian language. The model integrates graph neural networks (GNNs) with the contextualized embeddings of bidirectional encoder representations from transformers (BERT), enabling it to capture both the syntactic dependencies and the nuanced semantic information inherent in Russian. Whereas the GNN excels at modeling intricate word dependencies, the contextualized representations of BERT provide a deep understanding of the text, improving the model's ability to interpret sentiments accurately. The model further incorporates traditional feature extraction techniques, namely bag of words and term frequency–inverse document frequency, to preprocess text and emphasize features critical for sentiment analysis. To enhance these features further, an autoencoder-based clustering algorithm is employed, identifying latent patterns and improving the model's sensitivity to subtle sentiment variations. In the final phase, sentiments are classified on the basis of the enriched feature set. Experimental results showed that the GNN–BERT–AE model outperformed existing models (CNN–Transformer, RNN–LSTM–GRU, and Text–BiLSTM–CNN) on Russian social media datasets, achieving accuracy improvements of 1.25% to 3.1%. These results highlight the robustness of the model and its significant potential for advancing sentiment analysis in Russian, particularly in handling complex linguistic features.
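Although the paper's implementation details are not reproduced here, the pipeline the abstract describes (BERT embeddings, a GNN over syntactic dependencies, autoencoder compression, and a sentiment classifier) can be illustrated in a few lines of PyTorch. The sketch below is a minimal reconstruction under stated assumptions, not the authors' code: the RuBERT checkpoint (DeepPavlov/rubert-base-cased), the GCNConv layers from PyTorch Geometric, the layer sizes, and the toy dependency edges are all illustrative stand-ins, and the BoW/TF-IDF preprocessing and dependency parsing steps are omitted.

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer
from torch_geometric.nn import GCNConv

class AutoEncoder(nn.Module):
    """Compresses fused features into a latent code for clustering."""
    def __init__(self, in_dim, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

class GnnBertAE(nn.Module):
    # Checkpoint name and layer sizes are assumptions, not the paper's values.
    def __init__(self, bert_name="DeepPavlov/rubert-base-cased", num_classes=3):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size          # 768 for base models
        self.gcn1 = GCNConv(hidden, hidden)
        self.gcn2 = GCNConv(hidden, hidden)
        self.ae = AutoEncoder(hidden)
        self.classifier = nn.Linear(64, num_classes)   # 64 = AE latent size

    def forward(self, input_ids, attention_mask, edge_index):
        # 1) Contextualized token embeddings from BERT.
        tokens = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        # 2) GCN message passing over a token graph; edge_index is assumed
        #    to encode syntactic dependencies (parsing is outside this sketch).
        x = tokens.squeeze(0)                          # (seq_len, hidden); batch of 1
        x = torch.relu(self.gcn1(x, edge_index))
        x = self.gcn2(x, edge_index)
        # 3) Mean-pool to a document vector, then compress with the autoencoder.
        doc = x.mean(dim=0, keepdim=True)
        z, recon = self.ae(doc)
        # 4) Sentiment logits from the latent code.
        return self.classifier(z), recon, doc

# Toy usage: a Russian review with a few illustrative dependency edges.
tok = AutoTokenizer.from_pretrained("DeepPavlov/rubert-base-cased")
enc = tok("Отличный фильм, очень рекомендую!", return_tensors="pt")
edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]], dtype=torch.long)
model = GnnBertAE()
logits, recon, doc = model(enc["input_ids"], enc["attention_mask"], edge_index)

In training, the classification head would be optimized with cross-entropy while the autoencoder contributes a reconstruction term (e.g., mean squared error between doc and recon), one common way to realize the feature-enhancement step the abstract describes.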