2011 Volume E94.D Issue 11 Pages 2244-2249
Let $(X,Y)$ be an $\mathbb{R}^d\times\mathbb{R}$-valued random vector. In regression analysis one wants to estimate the regression function $m(x):=\mathbf{E}(Y|X=x)$ from a data set. In this paper we study the rate of convergence of the expected $L_2$ error of $k$-nearest neighbor estimators when $m$ is $(p,C)$-smooth. It is known that for $p > 1.5$ and $d=1$ the minimax rate cannot be achieved by any $k$-nearest neighbor estimator. We generalize this result to arbitrary $d \ge 1$. Throughout this paper we assume that the data are independent and identically distributed, and as error criterion we use the expected $L_2$ error.
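As an illustration of the estimator under discussion, the following is a minimal sketch (not the paper's construction) of the standard $k$-nearest neighbor regression estimate, which predicts $m(x)$ by averaging the responses $Y_i$ of the $k$ training points whose $X_i$ lie closest to $x$; all function and variable names here are illustrative.

```python
import numpy as np

def knn_regression(x_train, y_train, x_query, k):
    """k-nearest neighbor regression estimate:
    average the responses Y_i of the k training points
    whose X_i are closest (in Euclidean distance) to x_query."""
    dists = np.linalg.norm(x_train - x_query, axis=1)
    nearest = np.argsort(dists)[:k]  # indices of the k nearest neighbors
    return y_train[nearest].mean()

# Synthetic 1-d example with a smooth regression function m(x) = sin(pi x)
rng = np.random.default_rng(0)
n = 500
X = rng.uniform(-1, 1, size=(n, 1))
Y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(n)

est = knn_regression(X, Y, np.array([0.5]), k=25)
# est approximates m(0.5) = sin(pi/2) = 1
```

The choice of $k$ governs the bias-variance trade-off whose optimal balance determines the convergence rates analyzed in the paper.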