Person re-identification is the task of finding and matching the same individual across different camera views. For robust person re-identification, we propose a weighted feature-integration method that adapts to the illumination changes and appearance differences caused by differing camera views. First, we extract four kinds of local features (color histograms, frequency features, gray-level co-occurrence matrices, and histograms of oriented gradients) from the image of a person as clothing appearance information. Second, in a pre-training phase, we compute the difference in each local feature between a pair of images taken by different cameras. The local features are then weighted and integrated based on these differences. We tested three weighting functions: the reciprocal, a probability density function, and the average Bhattacharyya distance. In experiments on four public datasets (iLIDS-VID, GRID, PRID, and VIPeR), we verified the effectiveness of the proposed method. The results demonstrate a general improvement in person re-identification performance when the feature integration is weighted by the average Bhattacharyya distance.
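The abstract describes weighting per-feature distances by weights learned in a pre-training phase. As a minimal illustrative sketch (not the paper's implementation), the snippet below computes the Bhattacharyya distance between normalized histogram features and combines per-feature distances into one weighted matching score; the function names, the dictionary-based feature representation, and the simple weight normalization are assumptions for illustration only.

```python
import numpy as np

def bhattacharyya_distance(p, q, eps=1e-12):
    """Bhattacharyya distance between two histograms (normalized internally)."""
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    bc = np.sum(np.sqrt(p * q))  # Bhattacharyya coefficient, in [0, 1]
    return -np.log(bc + eps)

def weighted_match_score(feats_a, feats_b, weights):
    """Weighted integration of per-feature distances (hypothetical sketch).

    feats_a, feats_b: dicts mapping feature name (e.g. 'color', 'hog')
                      to a histogram vector for each image.
    weights: dict mapping feature name to a weight, e.g. one derived
             from average distances observed during pre-training.
    Returns a single dissimilarity score (lower = better match).
    """
    total = sum(weights.values())
    return sum(
        (weights[name] / total) * bhattacharyya_distance(feats_a[name], feats_b[name])
        for name in feats_a
    )
```

In this sketch, a pair of images of the same person should yield a lower weighted score than a pair of different persons, and features that discriminate poorly across camera views can be down-weighted.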