Character recognition from video camera images has become necessary as digital video cameras have gained in popularity. In this paper, we propose a new method of extracting four-directional features. The characteristic of our method is its use of dynamic scenes: it exploits images shifted by half a pixel at low resolution. We experimented with character recognition using the common database ETL2. To apply our method in practice, we calculated the center of gravity of the camera image. We also experimented with character recognition using images taken by a video camera. The recognition results show that our method is effective at low resolution.
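As a rough illustration of the two ingredients named above, the sketch below quantizes local gradients into four directions and pools them over a coarse grid, and emulates half-pixel-shifted frames by averaging adjacent pixels. Both functions are our own generic sketch, not the paper's exact feature definition.

```python
import numpy as np

def four_directional_features(img, grid=4):
    """Quantize local gradient orientation into four directions
    (horizontal, vertical, two diagonals) and pool gradient magnitude
    over a grid x grid mesh -- a generic directional-feature sketch."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)            # orientation in [0, pi)
    dir_idx = np.floor(ang / (np.pi / 4)).astype(int) % 4
    h, w = img.shape
    feat = np.zeros((grid, grid, 4))
    ys = (np.arange(h) * grid) // h                    # row -> grid cell
    xs = (np.arange(w) * grid) // w                    # col -> grid cell
    for i in range(h):
        for j in range(w):
            feat[ys[i], xs[j], dir_idx[i, j]] += mag[i, j]
    return feat.ravel()

def half_pixel_shifts(img):
    """Average adjacent pixels to emulate images shifted by half a
    pixel, mimicking the slightly shifted frames of a dynamic scene."""
    img = img.astype(float)
    return [img,
            0.5 * (img[:, :-1] + img[:, 1:]),          # half-pixel shift in x
            0.5 * (img[:-1, :] + img[1:, :])]          # half-pixel shift in y
```

Features extracted from each shifted frame can then be concatenated or averaged before classification.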
Although there are several studies on techniques for producing painting-like images from photographs, almost none of them consider the sketching process. Because we consider the sketching process rather important, we experimentally observed professional artists sketching. From the results, we extracted relationships between the types of sketches and the features the artists attend to. We call these features expressional features. Based on these expressional features, we present an image simulator that produces several types of expressions.
A presentation system with one 100-inch and two 60-inch TV monitors has been constructed for lectures. This system can take as its input printed matter, video, LD, CD, or pictures created on a personal computer. Materials such as textbooks, reference data, and educational videos were used for the experimental lectures. Questionnaires given to the students attending the experimental lectures revealed that most of the students considered the educational videos and reference data useful for understanding the lectures. However, principal component analysis of the evaluation data shows that the effect of the videos is independent of the students' understanding of the entire lectures.
We previously proposed a method of automatically constructing image transformations consisting of a sequence of given image filters. The sequence is optimized by a genetic algorithm to approximate the transformation from an original image to its target image, where the target image is an ideally processed one prepared manually. In this paper, we propose an extended method, ACTIT, which combines image filters having several inputs into tree structures. The leaf nodes of a tree are the original images, the other nodes are image filters, and the output of the root node is the output of the whole image transformation. To obtain good output images, the tree is optimized by genetic programming. In this way, we can construct complex image transformations that cannot be expressed as a simple sequence of image filters. We applied ACTIT to medical image processing and showed that it is useful for automating the construction of image transformations.
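The tree-of-filters idea can be sketched as follows. This is a heavily simplified toy version, assuming a tiny hypothetical filter bank and plain random regeneration instead of real genetic-programming operators (subtree crossover and mutation); ACTIT's actual filter set and search are richer.

```python
import random
import numpy as np

# Hypothetical minimal filter bank: (arity, function). The real ACTIT
# filter set is much larger and includes multi-input filters.
FILTERS = {
    "invert":  (1, lambda a: 255 - a),
    "smooth":  (1, lambda a: (a + np.roll(a, 1, 0) + np.roll(a, -1, 0)
                              + np.roll(a, 1, 1) + np.roll(a, -1, 1)) / 5),
    "maximum": (2, lambda a, b: np.maximum(a, b)),
    "mean":    (2, lambda a, b: (a + b) / 2),
}

def random_tree(depth):
    """Grow a random filter tree; leaves are the original input image."""
    if depth == 0 or random.random() < 0.3:
        return "input"
    name = random.choice(list(FILTERS))
    arity = FILTERS[name][0]
    return (name,) + tuple(random_tree(depth - 1) for _ in range(arity))

def evaluate(tree, img):
    """Apply the tree: evaluate children, then the node's filter."""
    if tree == "input":
        return img
    name, *children = tree
    return FILTERS[name][1](*(evaluate(c, img) for c in children))

def fitness(tree, img, target):
    """Negative mean absolute error against the manually made target."""
    return -np.mean(np.abs(evaluate(tree, img) - target))

def evolve(img, target, pop=30, gens=20):
    """Toy evolutionary loop: keep the best half, refill at random."""
    population = [random_tree(3) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda t: fitness(t, img, target), reverse=True)
        survivors = population[:pop // 2]
        population = survivors + [random_tree(3)
                                  for _ in range(pop - len(survivors))]
    return population[0]
```

Given a training pair (original image, hand-made target image), `evolve` returns the filter tree whose output best approximates the target, which can then be applied to unseen images.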
We have developed a monolithic 512×512-element GeSi/Si heterojunction infrared image sensor. Its operating mechanism is the same as that of the PtSi/Si Schottky-barrier detector. We fabricated the GeSi/Si heterojunction by molecular beam epitaxy and confirmed that ideal strained GeSi films were grown on Si substrates. We evaluated the dependencies of the spectral responsivity on the Ge composition, impurity concentration, and GeSi thickness, and optimized them for 8–12 μm infrared detection. The sensor has a pixel size of 34 × 34 μm² and a fill factor of 59%. A low noise equivalent temperature difference of 0.08 K (f/2.0) was obtained at a background of 300 K with a very small responsivity dispersion of 2.2%.