Nonlinear Theory and Its Applications, IEICE
Online ISSN: 2185-4106
ISSN-L: 2185-4106
Special Section on Resolving Nonlinear Problems by Python
Exploring latent spaces: A visual comparison of sentence-BERT and GPT-2 models
Masato Izumi, Kenya Jin'no

2025, Volume 16, Issue 2, Pages 233-249

Abstract

In recent years, artificial intelligence, and natural language processing (NLP) models in particular, has advanced rapidly. These models achieve remarkable results by training large-scale architectures on large datasets. However, their output process is often a black box, and their decision-making remains opaque. Our research focuses on the internal representations, specifically the latent variables, generated by NLP models. In earlier work, we explored the latent variables and latent spaces produced by Sentence-BERT using image generation models. That approach visualized these spaces by converting discrete textual embeddings into images, introducing continuity and revealing novel relationships. This paper presents an image generation model that uses a common decoder for both GPT-2 and Sentence-BERT, in order to examine how differences in model architecture affect their latent spaces. We also investigate the impact of training-data differences by comparing models trained on English and on Japanese data. Our findings indicate that while the two models often generate similar outputs, significant differences emerge for sentences containing multiple elements, attributable to the models' differing focuses and objectives. Our goal is to understand these latent spaces and thereby contribute to the development of explainable AI.
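As context for the approach described above, the following is a minimal, illustrative Python sketch (not the authors' implementation) of how fixed-size sentence embeddings might be obtained from Sentence-BERT and GPT-2 and projected into a common space for a shared decoder. The model checkpoints, the mean-pooling strategy for GPT-2, and the projection dimension are all assumptions for illustration.

import torch
from sentence_transformers import SentenceTransformer
from transformers import GPT2Tokenizer, GPT2Model

sentences = ["A cat sits on the mat.", "Two dogs run in the park."]

# Sentence-BERT: encode() returns one fixed-size vector per sentence.
sbert = SentenceTransformer("all-MiniLM-L6-v2")               # assumed checkpoint
sbert_emb = sbert.encode(sentences, convert_to_tensor=True)   # shape (2, 384)

# GPT-2: mean-pool the final hidden states over non-padding tokens
# to obtain one vector per sentence (an assumed pooling strategy).
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token   # GPT-2 defines no pad token
gpt2 = GPT2Model.from_pretrained("gpt2")
batch = tokenizer(sentences, return_tensors="pt", padding=True)
with torch.no_grad():
    hidden = gpt2(**batch).last_hidden_state                  # (2, seq_len, 768)
mask = batch["attention_mask"].unsqueeze(-1).float()
gpt2_emb = (hidden * mask).sum(dim=1) / mask.sum(dim=1)       # (2, 768)

# A shared decoder needs both embeddings in one common dimension,
# e.g. via per-encoder linear projections (a hypothetical design,
# not the paper's stated architecture).
proj_sbert = torch.nn.Linear(sbert_emb.shape[1], 512)
proj_gpt2 = torch.nn.Linear(gpt2_emb.shape[1], 512)
z_sbert = proj_sbert(sbert_emb)   # (2, 512), input to the common decoder
z_gpt2 = proj_gpt2(gpt2_emb)      # (2, 512)

Under this sketch, either z_sbert or z_gpt2 could be fed to the same decoder, allowing the two encoders' latent spaces to be compared through the images the decoder generates.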

© 2025 The Institute of Electronics, Information and Communication Engineers

This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license (https://creativecommons.org/licenses/by-nc-nd/4.0/).