Abstract
Intent detection and slot filling are essential tasks in natural language understanding (NLU) and natural language processing (NLP). They are widely used in dialogue systems, chatbots, virtual assistants, and other applications involving human-computer interaction. Traditional models could perform only one of these tasks at a time; however, recent advances in NLP allow both tasks to be handled jointly in a single model, yielding significantly better results than earlier approaches. The development of the Bidirectional Encoder Representations from Transformers (BERT) model has played a significant role in improving performance on these tasks, largely owing to the attention mechanism at its core. In the JointIDSF architecture, the authors add a further attention layer, termed the "intent-slot attention" layer, that explicitly incorporates intent context information into slot filling. Building on this idea, and to improve performance on the slot filling task, we propose a new architecture that combines BERT, a Convolutional Neural Network (CNN), and the "intent-slot attention" layer. Our proposed architecture achieves state-of-the-art results on the slot filling task, with a high F1 score, and significantly improves sentence-level semantic frame accuracy on publicly available benchmark datasets such as PhoATIS. Concretely, our model improves the slot filling F1 score by 0.02–0.2% and semantic accuracy by 0.02–0.37%, and it also shows a slight improvement on the intent detection task.