It has long been debated whether facial expressions are recognized categorically or dimensionally. We investigated this issue using three sets of morphed photographic facial images (happiness to anger, to surprise, and to sadness). Our interest was whether experimental data showing categorical perception can be explained from the dimensional point of view. Experiment 1 examined the facial stimuli, which were generated with reference to FACS (the Facial Action Coding System). In Experiment 2, subjects were asked to discriminate between pairs of faces and to categorize the emotion of faces shown one at a time. Discrimination accuracy between categories was better than that within each category. In Experiment 3, subjects were asked to rate the same stimuli using the semantic differential technique. We then examined the relationship between the discrimination accuracy obtained in Experiment 2 and the semantic distances obtained in Experiment 3, defined as the Euclidean distances between pairs of stimuli in a factor space constituted by two factors, interpreted as "pleasantness" and "activity". Discrimination accuracy within each category increased as a function of semantic distance, whereas discrimination accuracy between categories remained high irrespective of semantic distance.
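The semantic distance measure described above can be sketched as follows. This is a minimal illustration, not the study's analysis: the stimulus names and two-factor coordinates ("pleasantness", "activity") are hypothetical values invented for the example.

```python
import math

# Hypothetical factor-space coordinates (pleasantness, activity) for four
# stimuli along a happiness-to-anger morph continuum. These values are
# illustrative only; they are not the study's actual factor scores.
factor_scores = {
    "happy_100": (2.0, 1.0),
    "happy_70":  (1.2, 0.9),
    "anger_70":  (-1.1, 1.4),
    "anger_100": (-1.8, 1.6),
}

def semantic_distance(a, b):
    """Euclidean distance between two stimuli in the two-factor space."""
    (x1, y1), (x2, y2) = factor_scores[a], factor_scores[b]
    return math.hypot(x1 - x2, y1 - y2)

# A within-category pair and a between-category pair; in the study's
# finding, discrimination accuracy tracked distance only within a category.
within = semantic_distance("happy_100", "happy_70")
between = semantic_distance("happy_70", "anger_70")
```

Under this sketch, a pair straddling the category boundary can have a larger semantic distance than a within-category pair, yet the study found that between-category discrimination stayed high even when semantic distances were comparable.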