When an ordered dither method is applied to a binary black-and-white image, the resulting pseudo-halftone appears clearer to our visual sense. This is because we do not notice every detail of an image; instead, we perceive it from a more global standpoint. We previously investigated a one-dimensional visual information processing model, presenting theory and experimental results for a layered model comprising the external world, the retina, and the brain. This paper proposes a two-dimensional visual information processing model. An equivalent approximation method is applied, and the model is evaluated by obtaining restored images of the character 「犬」 and of GIRL.
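As a concrete illustration of the pseudo-halftone effect the abstract refers to, the sketch below applies a standard 4×4 Bayer ordered-dither matrix to a grayscale ramp; the specific matrix and image are illustrative assumptions, not the paper's own data.

```python
import numpy as np

# 4x4 Bayer threshold matrix (standard ordered-dither pattern), scaled to [0, 1).
BAYER_4 = np.array([[ 0,  8,  2, 10],
                    [12,  4, 14,  6],
                    [ 3, 11,  1,  9],
                    [15,  7, 13,  5]]) / 16.0

def ordered_dither(gray):
    """Binarize a grayscale image (values in [0, 1]) by ordered dithering.

    The tiled Bayer matrix supplies a spatially varying threshold, so the
    local density of black and white pixels approximates the local gray
    level -- the pseudo-halftone effect described in the abstract.
    """
    h, w = gray.shape
    tiled = np.tile(BAYER_4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (gray > tiled).astype(np.uint8)

# A horizontal gray ramp becomes a dot pattern whose density follows the ramp.
ramp = np.linspace(0.0, 1.0, 64).reshape(1, -1).repeat(16, axis=0)
halftone = ordered_dither(ramp)
```

Viewed from a distance, the binary `halftone` array is perceived as a smooth gradient, which is the global (rather than detail-level) perception the model addresses.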
In conventional digital signature techniques, the secret information used for authentication is disclosed to the verifier. This paper proposes a new digital signature system for image data, which can be used to assert the copyright of the image. In this system, a graph is generated from the image to be signed, and an isomorphic graph is concealed in the image. A zero-knowledge interactive proof (ZKIP) for graph isomorphism is applied to assert the copyright of the image; consequently, the secret information is not disclosed during the authentication process.
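The classic zero-knowledge protocol for graph isomorphism that the abstract builds on can be sketched as follows; the toy graphs and the simplified commitment (sending the permuted graph directly rather than a cryptographic commitment) are illustrative assumptions.

```python
import random

def permute(edges, perm):
    """Apply a vertex permutation to an undirected edge set."""
    return frozenset(frozenset((perm[u], perm[v])) for u, v in edges)

def zkip_round(g0, g1, pi, n):
    """One round of the graph-isomorphism zero-knowledge proof.

    The prover knows pi with permute(g0, pi) == g1.  She commits to a
    random relabeling H of g1; the verifier's coin-flip challenge asks
    her to exhibit an isomorphism from either g0 or g1 to H.  Either
    answer checks out without ever revealing pi itself.
    """
    sigma = list(range(n))
    random.shuffle(sigma)                      # prover's fresh secret
    h = permute(g1, sigma)                     # commitment sent to verifier
    b = random.randint(0, 1)                   # verifier's challenge
    if b == 1:
        return permute(g1, sigma) == h         # reveal sigma: g1 -> H
    answer = [sigma[pi[v]] for v in range(n)]  # compose pi then sigma: g0 -> H
    return permute(g0, answer) == h

# Toy example: g1 is a relabeling of g0 under the secret permutation pi.
g0 = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
pi = [2, 0, 3, 1]
g1 = permute(g0, pi)
ok = all(zkip_round(g0, g1, pi, 4) for _ in range(20))
```

A prover who does not know `pi` can answer only one of the two challenges, so each round catches a cheater with probability 1/2; repeating the rounds makes forgery negligible while the verifier learns nothing about `pi`.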
This paper describes a mechanism that uses a 2-level ontology of 3-D shapes to assist people in configuring 3-D objects, by interpreting the meaning of high-level descriptions and translating it into a lower-level parametric representation. The 2-level representation consists of a 3-D shape ontology, a parametric representation of primitive geometric shapes, and a conceptual ontology of component relations in 3-D shape domains. The paper demonstrates that the mechanism provides appropriate assistance in configuring 3-D objects from high-level descriptions, and verifies its usefulness through an experiment.
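The translation step from high-level description to parametric shape can be pictured as a two-level lookup; the entries and parameter names below are illustrative stand-ins, not the paper's actual ontology.

```python
# Hypothetical two-level lookup: a conceptual term resolves to a primitive
# geometric shape (level 1) plus parameter values for it (level 2).
SHAPE_ONTOLOGY = {
    "tall cylinder": {"primitive": "cylinder",
                      "params": {"radius": 0.5, "height": 3.0}},
    "flat box":      {"primitive": "box",
                      "params": {"width": 2.0, "depth": 2.0, "height": 0.2}},
}

def interpret(description):
    """Translate a high-level shape description into a parametric form."""
    entry = SHAPE_ONTOLOGY.get(description)
    if entry is None:
        raise KeyError(f"no ontology entry for {description!r}")
    return entry["primitive"], dict(entry["params"])
```

For example, `interpret("tall cylinder")` yields the primitive name and its parameters, which a downstream modeler could instantiate directly.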
A new method for real-time detection of facial expressions from time-sequential images is proposed. Unlike the current implementation for Virtual Space Teleconferencing, the proposed method does not require tape marks pasted on the face to detect expressions in real time. In the proposed method, four windows are applied to four areas of the face image: the left eye, the right eye, the mouth, and the forehead. Each window is divided into blocks of 8 by 8 pixels. The Discrete Cosine Transform (DCT) is applied to each block, and the feature vector of each window is obtained by summing the DCT energies in the horizontal, vertical, and diagonal directions. To convert the DCT features into virtual tape mark movements, we represent the displacement of a virtual tape mark as a polynomial in the DCT features for the three directions. A genetic algorithm is applied to training facial expression image sequences to find the set of coefficients that minimizes the difference between the real and converted displacements of the virtual tape marks. Experimental results show the effectiveness of the proposed method.
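The feature-extraction step can be sketched as below. Since the abstract does not specify how the DCT coefficients are partitioned into the three directions, the grouping by row-dominant, column-dominant, and diagonal AC coefficients is an assumption for illustration.

```python
import numpy as np

def dct2(block):
    """2-D DCT-II of a square block via the orthonormal DCT matrix."""
    n = block.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c @ block @ c.T

def directional_energies(window):
    """Sum DCT energies of 8x8 blocks in three directions.

    For each block, the squared AC coefficients are split into
    column-dominant (horizontal detail), row-dominant (vertical detail)
    and diagonal groups, and accumulated over all blocks in the window.
    """
    h = v = d = 0.0
    for by in range(0, window.shape[0] - 7, 8):
        for bx in range(0, window.shape[1] - 7, 8):
            coef = dct2(window[by:by + 8, bx:bx + 8]) ** 2
            for u in range(8):
                for w in range(8):
                    if u == 0 and w == 0:
                        continue              # skip the DC term
                    if u > w:
                        v += coef[u, w]       # variation down the rows
                    elif w > u:
                        h += coef[u, w]       # variation across the columns
                    else:
                        d += coef[u, w]       # diagonal variation
    return h, v, d

# Horizontal stripes vary only vertically, so the vertical energy dominates.
stripes = np.tile(np.array([[1.0], [0.0]]), (4, 8))   # 8x8, alternating rows
h_e, v_e, d_e = directional_energies(stripes)
```

The three sums per window would then feed the polynomial that maps DCT features to virtual tape mark displacements, whose coefficients the genetic algorithm optimizes.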