Abstract
This paper proposes a novel approach to inverse optimization problems based on neural network learning. Inverse optimization here means estimating a diagonal positive semidefinite quadratic criterion function under which a given solution is optimal subject to predetermined constraints. The task is reduced to finding gradient vectors that lie in the intersection of two polar cones: one determined by the Kuhn-Tucker condition and the other by the positive semidefinite condition. A new network architecture is proposed that satisfies the two conditions simultaneously; it is a special-purpose architecture with inequality constraints on some of the connection weights. Applications of the proposed method to three-variable examples demonstrate that it can indeed obtain a criterion function satisfying both conditions, and detailed analyses are given of the results obtained with different initial gradient vectors. Furthermore, real data on second-hand houses are analyzed by the proposed method. The present method is, however, restricted to estimating diagonal positive semidefinite quadratic criterion functions; its extension to general positive semidefinite quadratic criterion functions is left for future study.
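For illustration only, the following is a minimal numerical sketch, not the paper's neural-network architecture, of the two conditions the abstract refers to. It assumes a criterion of the form f(x) = Σ_i d_i (x_i − t_i)^2 with a hypothetical target point t, linear constraints A x ≤ b, and a given solution x_star; the data, the normalisation of the Kuhn-Tucker multipliers to one, and the use of nonnegative least squares are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical data: one linear constraint x1 + x2 + x3 <= 6 (plus an inactive
# bound -x1 <= 0), an observed solution x_star on the constraint boundary, and
# an assumed target point t of the criterion f(x) = sum_i d_i * (x_i - t_i)^2.
A = np.array([[ 1.0, 1.0, 1.0],
              [-1.0, 0.0, 0.0]])
b = np.array([6.0, 0.0])
x_star = np.array([1.0, 2.0, 3.0])
t = np.array([2.0, 4.0, 6.0])

# Keep only the constraints active at x_star (Kuhn-Tucker multipliers of
# inactive constraints must be zero).
active = np.isclose(A @ x_star, b)
A_act = A[active]                      # here: just x1 + x2 + x3 <= 6

# Stationarity with the active multipliers normalised to 1 (a scale choice):
#   2 * d_i * (x_star_i - t_i) + (A_act^T @ ones)_i = 0,   with  d_i >= 0.
# Solve for d by nonnegative least squares; a near-zero residual means a
# diagonal positive semidefinite criterion exists whose gradient at x_star
# lies in the required cone.
lhs = np.diag(2.0 * (x_star - t))
rhs = -A_act.T @ np.ones(A_act.shape[0])
d, residual = nnls(lhs, rhs)

print("diagonal weights d =", d, "residual =", residual)
# Expected: d ≈ [0.5, 0.25, 0.1667], residual ≈ 0
```

In this toy instance the diagonal weights come out nonnegative and the residual vanishes, so both the Kuhn-Tucker and positive semidefinite conditions can be met for the given x_star; the paper's method instead searches for such gradient vectors by training a special-purpose network with inequality-constrained weights.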