Abstract
Numerical computations for large-scale linear programming problems often suffer from instability owing to modelling errors and the accumulation of round-off errors. The question therefore arises whether the effect of these errors increases or decreases with the size of the problem. We show that the effect of random errors in the original data on the optimum usually tends to decrease as the number of variables increases. This result is stated in terms of the law of large numbers from probability theory.
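As an informal illustration of the claim (not part of the paper), the following minimal Monte-Carlo sketch perturbs the cost vector of a random, bounded-feasible LP with i.i.d. zero-mean noise and measures the relative change in the optimal value as the number of variables grows. The problem family, noise level, and solver choice are my own illustrative assumptions, not the authors' setup.

```python
# Illustrative sketch only: a random LP family and i.i.d. cost-data errors,
# chosen by the editor, not taken from the paper.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

def random_lp(n, m):
    """Random feasible LP: max c^T x  s.t.  A x <= b,  0 <= x <= 1."""
    c = rng.uniform(0.5, 1.5, n)       # positive objective coefficients
    A = rng.uniform(0.0, 1.0, (m, n))
    b = A @ np.full(n, 0.5)            # x = 0.5 * ones is feasible
    return c, A, b

def optimum(c, A, b, n):
    # linprog minimizes, so negate c to maximize c^T x
    res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0.0, 1.0)] * n, method="highs")
    return -res.fun

for n in (10, 100, 1000):
    c, A, b = random_lp(n, n // 2)
    z0 = optimum(c, A, b, n)
    # i.i.d. zero-mean errors in the cost data; record the relative
    # change they induce in the optimal value
    rel = [abs(optimum(c + rng.normal(0.0, 0.05, n), A, b, n) - z0) / z0
           for _ in range(20)]
    print(f"n={n:5d}  mean relative change in optimum: {np.mean(rel):.5f}")
```

Under these assumptions, the averaging effect described by the law of large numbers should show up as a shrinking mean relative change in the optimal value as n increases, consistent with the trend the abstract states.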