In most conventional approaches to the synthesis of human facial expressions, facial images are generated by manually moving feature points on a face based on the Facial Action Coding System (FACS), primarily with 3D models such as wireframe models. This paper describes a synthesis-by-analysis approach that uses range images to produce 3D images of human faces with primary expressions. First, view-independent representations of the 3D locations of facial feature points are obtained using an object-centered coordinate system defined on the face. We then quantify the feature point locations for the neutral expression and the six primary expressions. Finally, by applying an image warping technique to both the registered range and surface texture images, we generate 3D facial expression images from a neutral expression image and the motion vectors of the facial feature points.
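The final step, warping a neutral image according to feature-point motion vectors, can be sketched as follows. This is a simplified, hypothetical stand-in for the paper's warping technique: per-pixel displacement is an inverse-distance-weighted blend of the feature-point vectors, with nearest-neighbour backward sampling; the function name, data, and weighting scheme are illustrative assumptions, not the authors' implementation. The same warp could be applied to both a registered range image and its texture image.

```python
import numpy as np

def warp_image(image, points, vectors, eps=1e-6):
    """Backward-warp `image` so that each feature point moves by its
    motion vector.  The displacement at every pixel is an
    inverse-distance-weighted blend of the feature-point vectors
    (a simplified sketch, not the paper's exact warping method)."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.stack([xs, ys], axis=-1).astype(float)          # (h, w, 2)
    # Distance from every pixel to every feature point.
    d = np.linalg.norm(grid[:, :, None, :] - points[None, None, :, :],
                       axis=-1)                               # (h, w, n)
    wgt = 1.0 / (d + eps)
    wgt /= wgt.sum(axis=-1, keepdims=True)
    disp = (wgt[..., None] * vectors[None, None, :, :]).sum(axis=2)
    # Backward mapping: sample the source at (target - displacement).
    src = np.rint(grid - disp).astype(int)
    src[..., 0] = np.clip(src[..., 0], 0, w - 1)
    src[..., 1] = np.clip(src[..., 1], 0, h - 1)
    return image[src[..., 1], src[..., 0]]

# Hypothetical toy data: one feature point moved 2 px right on an 8x8 image.
neutral = np.zeros((8, 8))
neutral[4, 3] = 1.0                 # a single bright pixel at (x=3, y=4)
points  = np.array([[3.0, 4.0]])    # (x, y) of the feature point
vectors = np.array([[2.0, 0.0]])    # motion vector: 2 px in +x
expression = warp_image(neutral, points, vectors)
```

With a single feature point the weights are uniform, so the whole image shifts by the point's motion vector; with several points (e.g. mouth corners and eyebrows for a smile), nearby regions follow their closest features while distant regions stay put.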