Machine learning techniques have realized some principal cognitive functionalities, such as nonlinear generalization and causal model construction, provided that huge amounts of data are available. A next frontier for cognitive modelling would be the human ability to transfer past knowledge to novel, ongoing experience, making analogies from the known to the unknown. Novel metaphor comprehension may be considered an example of such transfer learning and analogical reasoning, one that can be empirically tested in a relatively straightforward way. Based on concepts inherent in category theory, we implement a model of metaphor comprehension called the theory of indeterminate natural transformation (TINT) and test its descriptive validity for human metaphor comprehension. We simulate metaphor comprehension with two models: one structure-ignoring and the other structure-respecting. The former is a sub-TINT model, while the latter is the minimal-TINT model. As the required input to the TINT models, we gathered association data from human participants to construct the "latent category" for TINT, which is a complete weighted directed graph. To test the validity of the TINT models, we conducted an experiment examining how humans comprehend a metaphor. While the sub-TINT does not show any significant correlation with the human data, the minimal-TINT shows significant correlations. This suggests that TINT can capture metaphor comprehension processes in a quite bottom-up manner.
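The "latent category" mentioned above is a complete weighted directed graph built from association data. As a minimal sketch of what such a construction might look like, the snippet below builds a complete weighted digraph from hypothetical association strengths; all word pairs and weights are illustrative, not the paper's dataset, and the small default weight for unobserved pairs is an assumption made here to keep the graph complete.

```python
from itertools import permutations

# Hypothetical association strengths between words (illustrative only).
associations = {
    ("love", "heart"): 0.9,
    ("heart", "love"): 0.7,
    ("love", "rose"): 0.4,
    ("rose", "love"): 0.3,
    ("heart", "rose"): 0.2,
}

def latent_category(assoc, epsilon=1e-6):
    """Build a complete weighted directed graph as a dict of edge weights.

    Every ordered pair of distinct nodes gets an edge; pairs missing from
    the association data receive a small default weight (an assumption in
    this sketch) so that the graph is complete.
    """
    nodes = {word for pair in assoc for word in pair}
    return {
        (a, b): assoc.get((a, b), epsilon)
        for a, b in permutations(nodes, 2)
    }

graph = latent_category(associations)
# A complete directed graph on n nodes has n * (n - 1) edges; here n = 3.
```

A graph of this shape could then serve as the input over which structure-ignoring and structure-respecting comprehension processes are simulated.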