This paper reviews current work on visual search within a framework derived from the feature integration theory (FIT) of attention proposed by Treisman and her colleagues. The following topics are covered: parallel processing of feature conjunctions, distractor homogeneity, display element similarity, multiple conjunctions, multiple targets, task-dependent search, eye movements, and learning effects. Such work is central to the study of attention and is currently among its most active areas. These studies raise critical questions about FIT, namely the nature of a feature map and the dichotomy between parallel and serial processing. Three models of visual search — SERR, guided search, and multiresolutional models — are compared with one another, and their capabilities are evaluated against critical phenomena such as search asymmetry and negative search functions. Finally, we discuss how these models can be linked.