Parametric inference based on empirically estimable divergences is becoming popular. This paper focuses on divergences that possess affine invariance as well as empirical estimability. Under some assumptions, we show that a divergence with these two properties is essentially equivalent to the Hölder divergence. We discuss the intersection of the Hölder divergence and the well-known Bregman divergence, which reduces to a simple divergence with only two parameters. We also consider parametric inference based on the Hölder divergence using an enlarged model. This idea enables us to estimate the outlier ratio as well as the model parameter. In addition, the resulting parameter estimate is based on the γ-divergence, which is known to be very useful in robust statistics, so the estimate inherits various desirable properties.
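For reference, one standard form of the γ-divergence mentioned above (the definition of Fujisawa and Eguchi, stated here as background rather than taken from the paper itself) between densities $g$ and $f$ is
\[
D_\gamma(g,f) = \frac{1}{\gamma(1+\gamma)} \log \int g(x)^{1+\gamma}\,dx
- \frac{1}{\gamma} \log \int g(x)\, f(x)^{\gamma}\,dx
+ \frac{1}{1+\gamma} \log \int f(x)^{1+\gamma}\,dx, \qquad \gamma > 0.
\]
The cross term is an expectation under $g$ and can be estimated empirically from a sample drawn from $g$, while the first term does not depend on $f$ and can be dropped in estimation; this is the sense in which such divergences are empirically estimable.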
Determining whether a test statistic is unbiased or biased is important in hypothesis testing. The unbiasedness or biasedness of the generalized Wilcoxon rank-sum test and the Jonckheere-Terpstra-type test is investigated. Deriving the exact critical value of a test statistic can be difficult as the sample sizes grow. In this situation, an approximation to the distribution function of the test statistic based on higher-order moments can be useful. We derive expressions for the moment generating function of a linear rank statistic and of a combination of two linear rank statistics. The accuracy of several approximations to the tail probabilities of these statistics is investigated. The normal approximation, the Edgeworth expansion, the saddlepoint approximation, and a moment-based approximation with an adjusted polynomial of a specific distribution are used to evaluate the upper tail probability for various nonparametric tests.
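For reference, two standard forms of the tail approximations named above (stated here as background, not necessarily the exact variants used in the paper) are the one-term Edgeworth expansion
\[
P(S_n \ge x) \approx 1 - \Phi(x) + \phi(x)\,\frac{\kappa_3}{6}\,(x^2 - 1),
\]
where $S_n$ is the standardized statistic and $\kappa_3$ its third standardized cumulant, and the Lugannani-Rice saddlepoint approximation
\[
P(S \ge s) \approx 1 - \Phi(\hat w) + \phi(\hat w)\left(\frac{1}{\hat u} - \frac{1}{\hat w}\right),
\qquad
\hat w = \operatorname{sgn}(\hat t)\sqrt{2\{\hat t s - K(\hat t)\}},
\quad
\hat u = \hat t \sqrt{K''(\hat t)},
\]
where $K$ is the cumulant generating function of $S$ and $\hat t$ solves $K'(\hat t) = s$. Both corrections rely on the higher-order moments (cumulants) mentioned above.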