We propose a method to extract bigram knowledge from GPT-2 models. Based on the observation that the first layer of GPT-2 is useful for predicting the tokens that follow given input tokens, we propose an algorithm that uses only the self-attention heads of the first layer to predict next tokens. We also propose an algorithm that finds contextual words highly related to a given bigram by applying backpropagation to the GPT-2 parameters used for next-token prediction. Experimental results showed that our proposed algorithms for next-word prediction and context-word induction achieved higher average precision than the baseline methods.
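The abstract does not give implementation details of the proposed algorithms. The sketch below is only a rough, hedged illustration of the general idea of predicting next tokens from GPT-2's first layer, assuming a Hugging Face `transformers` GPT-2 model; it projects the hidden state after the first transformer block through the output embedding, rather than implementing the authors' per-attention-head algorithm.

```python
# Hedged sketch, NOT the authors' method: score next-token candidates for a
# single input token using only the hidden state produced by GPT-2's first
# transformer block (Hugging Face transformers + PyTorch assumed).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

def first_layer_next_tokens(word: str, top_k: int = 5):
    """Return top-k next-token candidates using only the first layer's output."""
    ids = tokenizer(word, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids)
    # hidden_states[0] is the embedding output; hidden_states[1] is the output
    # of the first transformer block (attention + MLP).
    h1 = out.hidden_states[1][0, -1]        # vector for the last input position
    h1 = model.transformer.ln_f(h1)         # final layer norm, as used before the LM head
    logits = model.lm_head(h1)              # project to the vocabulary
    top = torch.topk(logits, top_k)
    return [tokenizer.decode(int(i)) for i in top.indices]

# Example usage: tokens that the first layer alone predicts after "New".
print(first_layer_next_tokens("New"))
```

The second algorithm, which backpropagates through the GPT-2 parameters to find contextual words related to a bigram, is not sketched here because the abstract does not specify how the gradient signal is mapped back to vocabulary items.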