Transactions of the Operations Research Society of Japan
Online ISSN : 2188-8280
Print ISSN : 1349-8940
ISSN-L : 1349-8940
Volume 55
Displaying 1-18 of 18 articles from this issue
  • Article type: Cover
    2012 Volume 55 Pages Cover1-
    Published: 2012
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    Download PDF (20398K)
  • Article type: Appendix
    2012 Volume 55 Pages App1-
    Published: 2012
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    Download PDF (59K)
  • Article type: Appendix
    2012 Volume 55 Pages App2-
    Published: 2012
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    Download PDF (59K)
  • Article type: Appendix
    2012 Volume 55 Pages App3-
    Published: 2012
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    Download PDF (59K)
  • Shingo Nakanishi
    Article type: Article
    2012 Volume 55 Pages 1-26
    Published: 2012
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    In this study, gambling is defined as gaming, its banker is called the gaming enterprise, winners are the players whose earnings exceed their principal, and losers are the players whose earnings do not. We approximate the repeated trials of coin tossing or dice throwing by a normal distribution derived from each discrete distribution, and the game includes a charge levied by the gaming enterprise according to the number of trials. We examine several characteristics under three conditions: in Case 1 the probability of heads in the coin toss equals 0.5; in Case 2 it is less than 0.5; and in Case 3 all outcomes of the dice throw are equally likely. Moreover, we assume that no player is allowed to leave the market, so every player keeps repeating the game under independent and identically distributed trials. Using power regression analysis, we then show the tendencies of both the maximum total expected gain of the winners and the expected earnings of the gaming enterprise. From the obtained outcomes we can also visualize, as a reflection effect under uncertainty with the enterprise's charge, the winners' risk aversion and the losers' risk preference, which we model as two power functions of the expected gain. In addition, the proposed model yields the stopping trial number at which the expected earnings of the gaming enterprise and the expected gain of the winners balance. Finally, we show that a stopping rule for the gaming trials that accounts for the charge is useful for maximizing the total gains of the winners.
    Download PDF (1691K)
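The repeated-trial setting in the entry above can be sketched numerically. A minimal Monte Carlo sketch of Case 1 (fair coin), assuming a unit stake per toss and a fixed per-trial charge; the stake, charge, and player counts are illustrative assumptions, not values from the paper:

```python
import random

def simulate_gaming(n_players=10000, n_trials=100, p_head=0.5,
                    charge_per_trial=0.01, seed=0):
    """Each player repeats a unit-stake coin toss n_trials times,
    paying a fixed charge to the gaming enterprise per trial."""
    rng = random.Random(seed)
    winners_gain = 0.0   # total gain of players ending above their principal
    for _ in range(n_players):
        gain = 0.0
        for _ in range(n_trials):
            gain += 1.0 if rng.random() < p_head else -1.0
            gain -= charge_per_trial
        if gain > 0:
            winners_gain += gain
    # The enterprise's earnings are deterministic: charge x trials x players.
    enterprise = charge_per_trial * n_trials * n_players
    return winners_gain, enterprise

winners_total, enterprise_total = simulate_gaming()
```

Sweeping `n_trials` in such a simulation is one way to observe the trade-off between the winners' total gain and the enterprise's earnings that the stopping rule balances.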
  • Tsutomu Suzuki
    Article type: Article
    2012 Volume 55 Pages 27-41
    Published: 2012
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    This paper formulates a facility location problem for integrating two kinds of facilities, in which planners seek the optimal location or selection, among existing facilities of two old types, of emerging new-type facilities that minimize users' travel distance to the closest service outlets. The model is applied to the unification of kindergartens in excess supply and nurseries in short supply. Given the changing geographical pattern of supply and demand for child education and care services, the model yields optimal combinations of the two kinds of facilities that are suitable for conversion to new-type facilities.
    Download PDF (1049K)
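The selection step described in the entry above resembles a small p-median problem: pick p existing sites to convert so that total travel distance to the closest chosen site is minimized. A brute-force sketch with made-up coordinates and rectilinear distance (all data here are illustrative assumptions):

```python
from itertools import combinations

def best_conversion(sites, demands, p):
    """Choose p of the existing sites so that the total distance from
    each demand point to its closest chosen site is minimized
    (rectilinear distance, brute force over all subsets)."""
    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    best_cost, best_subset = float("inf"), None
    for subset in combinations(sites, p):
        cost = sum(min(dist(d, s) for s in subset) for d in demands)
        if cost < best_cost:
            best_cost, best_subset = cost, subset
    return best_subset, best_cost

sites = [(0, 0), (4, 0), (2, 3), (5, 5)]     # existing facilities
demands = [(0, 1), (4, 1), (5, 4), (2, 2)]   # demand points
subset, cost = best_conversion(sites, demands, p=2)
```

Real instances of this kind are solved by integer programming rather than enumeration; the sketch only illustrates the objective.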
  • Norio Hibiki, Kenzo Ogi, Masahiro Toshiro
    Article type: Article
    2012 Volume 55 Pages 42-65
    Published: 2012
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    The probability of default (PD) of a small company is estimated by a credit scoring model that mainly uses financial indices. Default is affected not only by firm-specific factors but also by common factors, so it is desirable to include macroeconomic factors as explanatory variables in order to improve the accuracy of the estimated PDs. However, a serious problem is that there are not enough time-series data on defaults to determine the macroeconomic indices by regression. The need to model credit scoring with macroeconomic variables has recently become widely recognized, because actual default rates (DRs) have exceeded the estimated PDs since the serious economic downturn that began around 2007. In this paper, we determine the macroeconomic indices by using about 540,000 loan records from the Micro Business and Individual Unit of the Japan Finance Corporation, compensating for the lack of time-series data on macroeconomic factors. The analysis shows that the previous month's default rate is significant. We improve the accuracy of the estimated PDs with a modified credit scoring model that includes the previous month's default rate; the difference between the estimated PDs and the actual DRs is reduced by up to 0.72%.
    Download PDF (1939K)
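The idea in the entry above, adding the previous month's default rate as a common factor alongside firm-specific scores, can be sketched with a logit-style PD. The functional form and every coefficient below are hypothetical, chosen only to illustrate the direction of the effect:

```python
import math

def estimated_pd(financial_score, prev_month_dr,
                 b0=-3.0, b1=-1.5, b2=20.0):
    """Logit-style PD combining a firm-specific financial score (higher
    score -> lower PD, hence b1 < 0) with the previous month's default
    rate as a macroeconomic factor.  Coefficients are hypothetical."""
    z = b0 + b1 * financial_score + b2 * prev_month_dr
    return 1.0 / (1.0 + math.exp(-z))

# A downturn (higher previous-month DR) raises the estimated PD
# for the same financial score.
calm = estimated_pd(0.5, prev_month_dr=0.01)
downturn = estimated_pd(0.5, prev_month_dr=0.05)
```

In practice the coefficients would be fitted to the loan data by maximum likelihood; the sketch only shows how the macro covariate enters the score.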
  • Tomonari Kitahara, Shinji Mizuno
    Article type: Article
    2012 Volume 55 Pages 66-83
    Published: 2012
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    The simplex method is the most fundamental method for solving linear programming problems (LPs), and it can efficiently solve large-scale practical LPs. However, whether the simplex method is a polynomial-time algorithm remains an open question. Moreover, for some pivoting rules, LP instances that require an exponential number of iterations are known, such as the Klee-Minty problem. Recently, Kitahara and Mizuno obtained an upper bound on the number of distinct basic solutions generated by the primal simplex method with Dantzig's rule. The bound is expressed in terms of the number of constraints, the number of variables, and the minimum and maximum values of the positive elements of primal basic feasible solutions. By computing the actual number of distinct basic solutions for an LP instance, they showed that the bound is almost tight. In this paper, we summarize these results and explain related results using examples and figures.
    Download PDF (1383K)
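The object being counted in the entry above, basic solutions visited by the primal simplex method under Dantzig's rule, can be made concrete with a small tableau implementation. A minimal sketch for `max c^T x s.t. Ax <= b, x >= 0` with `b >= 0` (a generic textbook simplex, not the authors' analysis), using exact rational arithmetic:

```python
from fractions import Fraction

def dantzig_simplex(A, b, c):
    """Tabular simplex with Dantzig's rule (most negative reduced cost)
    for  max c^T x  s.t.  Ax <= b, x >= 0, b >= 0.  Returns the optimal
    x, the optimal value, and the number of basic solutions visited."""
    m, n = len(A), len(c)
    # Tableau rows: [A | I | b] for each constraint, then the cost row.
    T = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(int(i == k)) for k in range(m)] +
         [Fraction(b[i])] for i in range(m)]
    T.append([-Fraction(cj) for cj in c] + [Fraction(0)] * (m + 1))
    basis = list(range(n, n + m))
    visited = 1                          # the initial basic solution
    while True:
        col = min(range(n + m), key=lambda j: T[m][j])   # Dantzig's rule
        if T[m][col] >= 0:
            break                        # optimal
        ratios = [(T[i][-1] / T[i][col], i)
                  for i in range(m) if T[i][col] > 0]
        if not ratios:
            raise ValueError("unbounded LP")
        _, row = min(ratios)             # minimum-ratio test
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):
            if i != row and T[i][col] != 0:
                f = T[i][col]
                T[i] = [T[i][j] - f * T[row][j] for j in range(n + m + 1)]
        basis[row] = col
        visited += 1
    x = [Fraction(0)] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, T[m][-1], visited

# max 3x1 + 2x2  s.t.  x1 + x2 <= 4,  x1 + 3x2 <= 6,  x >= 0
x, opt, visited = dantzig_simplex([[1, 1], [1, 3]], [4, 6], [3, 2])
```

Running such a counter on instances like Klee-Minty is one way to compare the observed count against an upper bound of the Kitahara-Mizuno type.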
  • Keisuke Takaya, Norio Hibiki
    Article type: Article
    2012 Volume 55 Pages 84-109
    Published: 2012
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    This paper discusses a stochastic programming approach to a multi-period portfolio optimization problem. Hibiki (2001) advocates the simulation/tree hybrid model for solving a multi-period optimal asset allocation problem with Monte Carlo simulation. We propose a linear approximation model based on the hybrid model and compare it in numerical examples with other optimization models: the stochastic programming models (i.e., the simulation model and the hybrid model), continuous-time models for which analytical solutions can be derived, and the Monte Carlo regression model proposed by Brandt et al. (2005). The results show that the linear approximation model yields better values than the other models for problems with a CRRA utility function and with the first-order lower partial moment, one of the downside risk measures. In addition, the investment ratio at the initial time is closer to the analytical solution. The objective function value of our model improves on that of the hybrid model for a problem of the same scale. Especially in the CRRA case, the expected utility is larger and the state-dependent investment ratio is closer to the analytical solution than with the Monte Carlo regression model. These results indicate that our model has desirable properties for solving multi-period portfolio optimization problems.
    Download PDF (2212K)
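As a much simpler cousin of the simulation-based models compared in the entry above, a one-period CRRA allocation can be found by grid search over Monte Carlo scenarios. All parameters below are illustrative assumptions; with these values the continuous-time Merton benchmark (mu - rf)/(gamma * sigma^2) is about 0.42, which the simulated optimum should roughly reproduce:

```python
import random

def crra_utility(w, gamma=3.0):
    """CRRA utility u(w) = w**(1 - gamma) / (1 - gamma) for gamma != 1."""
    return w ** (1.0 - gamma) / (1.0 - gamma)

def best_ratio(n_scenarios=20000, rf=0.01, mu=0.06, sigma=0.2, seed=1):
    """Grid-search the risky-asset ratio that maximizes average CRRA
    utility over simulated one-period returns (one risky asset)."""
    rng = random.Random(seed)
    returns = [rng.gauss(mu, sigma) for _ in range(n_scenarios)]
    best_a, best_u = 0.0, float("-inf")
    for step in range(101):
        a = step / 100.0
        # Floor wealth at a small positive value as a numerical safeguard.
        u = sum(crra_utility(max(1.0 + a * r + (1.0 - a) * rf, 1e-3))
                for r in returns) / n_scenarios
        if u > best_u:
            best_a, best_u = a, u
    return best_a

ratio = best_ratio()
```

The multi-period models in the paper replace this grid search with optimization over scenario trees and simulated paths; the sketch only conveys the objective being maximized.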
  • Saeko Kimura, Hiroshi Yabe
    Article type: Article
    2012 Volume 55 Pages 110-131
    Published: 2012
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    Support vector machines (SVMs) have attracted attention for solving binary classification problems. SVMs usually employ positive definite kernels in many applications. On the other hand, SVMs with indefinite kernels have been studied over the past decade, because such SVMs can exploit application-specific structure in the data. Recently, Luss and d'Aspremont (2009) formulated a convex optimization problem to deal with them. Their formulation comes from a max-min problem with a penalty term that controls the distance between the original indefinite kernel matrix and a proxy positive semidefinite kernel matrix, and they gave a projected gradient method to solve it. However, their method needs to compute the eigenvalues and eigenvectors of a matrix corresponding to the given indefinite kernel matrix. In this paper, we first introduce the Barzilai-Borwein method in place of the gradient method of Luss and d'Aspremont to accelerate the computation in practice. Secondly, we propose a new formulation of SVMs with indefinite kernels to overcome the drawback that the model of Luss and d'Aspremont requires eigenvalues and eigenvectors. Since our formulation is a quadratic optimization problem, it can easily be solved by a suitable numerical method such as the SMO method. Finally, we present numerical experiments that investigate the numerical performance of our method and the generalization performance of our formulation.
    Download PDF (1323K)
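The Barzilai-Borwein method mentioned in the entry above replaces a fixed gradient step size with one fitted to the two most recent iterates. A minimal sketch on a small strictly convex quadratic (the objective here is illustrative, not the kernel learning problem of the paper):

```python
def bb_minimize(grad, x0, n_iter=50, step0=0.1):
    """Barzilai-Borwein gradient descent: step alpha_k = (s^T s)/(s^T y)
    with s = x_k - x_{k-1} and y = grad(x_k) - grad(x_{k-1})."""
    x = list(x0)
    g = grad(x)
    alpha = step0
    for _ in range(n_iter):
        x_new = [xi - alpha * gi for xi, gi in zip(x, g)]
        g_new = grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]
        y = [a - b for a, b in zip(g_new, g)]
        sy = sum(a * b for a, b in zip(s, y))
        if sy > 1e-12:                 # keep the old step near convergence
            alpha = sum(a * a for a in s) / sy
        x, g = x_new, g_new
    return x

# Minimize f(x) = 2*x0^2 + 0.5*x1^2 (gradient [4*x0, x1]); minimum at 0.
sol = bb_minimize(lambda x: [4.0 * x[0], x[1]], [3.0, -2.0])
```

The appeal over a plain gradient method is that no step-size schedule or line search is needed, which matches its use above as a drop-in accelerator.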
  • Yasuhiro Kanai, Keiji Abe
    Article type: Article
    2012 Volume 55 Pages 132-148
    Published: 2012
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    We examine which strategy should be adopted in order to expand sales volume in an expanding market. The analysis uses a simulation model of the pricing process built from a macro perspective, together with actual sales data. We derive from the model how to operate the market expansion variables and clarify the grounds for the method. Moreover, by comparing the simulation results with the actual sales data, we observe that the absolute value of the power index of the price-sales distribution increases as the market expands. When resource-constrained companies aim to enter the market, they often adopt a strategy of specializing in a particular price range. We find that in such a case specializing in the low end is more effective than specializing in high-priced products.
    Download PDF (1531K)
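The power index referred to in the entry above is the exponent of an assumed power-law relation between price and sales. A sketch of estimating it by a log-log least-squares fit, on synthetic data (not the paper's data or method):

```python
import math

def power_index(prices, sales):
    """Least-squares slope of log(sales) against log(price): the
    exponent beta of an assumed relation  sales ~ price**beta."""
    xs = [math.log(p) for p in prices]
    ys = [math.log(s) for s in sales]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Exact power-law data sales = price**-2 recovers beta = -2.
prices = [1.0, 2.0, 4.0, 8.0]
beta = power_index(prices, [p ** -2 for p in prices])
```

Tracking |beta| over time on real sales data is one way to observe the market-expansion effect the paper reports.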
  • Ikuko Takagi, Takafumi Matsuura, Kazumiti Numata
    Article type: Article
    2012 Volume 55 Pages 149-160
    Published: 2012
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    We propose new exact solution methods for the cyclic fair sequence problem (cFSP). The cFSP is a scheduling problem that allocates fractional executions of given jobs to a cyclic sequence of unit-time slots of a given periodic length, where each job must be processed in a specified number of slots per period, at intervals that are as equal as possible. The measure of equal intervals is somewhat ambiguous. Corominas et al. presented an exact solution method for the cFSP with a basic equality measure that minimizes the sum of squared differences between actual and ideal intervals. They reported that instances with up to about 40 periodic time slots can be solved through their formulation by a general-purpose mixed integer programming (MIP) solver. We propose three formulations for solving the cFSP based on partitioning the cyclic slots by pre-generated job allocation patterns: a simple partitioning formulation, an improved formulation with reduced patterns, and a traveling salesman problem (TSP)-like formulation that treats patterns as salesman's tours. These formulations can flexibly handle any equality measure as long as it is based on the difference between actual and ideal intervals. Numerical experiments evaluating the proposed methods show that the TSP-like formulation outperforms the existing ones and succeeds in solving instances roughly 40% larger than the existing best.
    Download PDF (989K)
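The basic equality measure described in the entry above (sum of squared deviations of actual cyclic intervals from the ideal interval T/n_j) can be checked by brute force on a tiny instance. A sketch of the objective only, not one of the paper's formulations:

```python
from itertools import permutations

def fairness_cost(seq, jobs):
    """Sum over jobs of squared differences between the actual cyclic
    intervals of consecutive occurrences and the ideal interval T/n_j."""
    T = len(seq)
    cost = 0.0
    for job, n in jobs.items():
        pos = [i for i, s in enumerate(seq) if s == job]
        ideal = T / n
        for k in range(n):
            actual = (pos[(k + 1) % n] - pos[k]) % T or T  # full cycle if n == 1
            cost += (actual - ideal) ** 2
    return cost

def brute_force_cfsp(jobs):
    """Enumerate all sequences of the required slot multiset and
    return one minimizing the fairness cost."""
    slots = [j for j, n in sorted(jobs.items()) for _ in range(n)]
    best_seq, best_cost = None, float("inf")
    for perm in set(permutations(slots)):
        c = fairness_cost(perm, jobs)
        if c < best_cost:
            best_seq, best_cost = perm, c
    return best_seq, best_cost

seq, cost = brute_force_cfsp({"a": 2, "b": 2})
# A perfectly fair cycle such as a b a b has every interval equal to
# its ideal value 2, so the optimal cost is 0.
```

Enumeration explodes factorially, which is why the paper works with MIP formulations over pre-generated allocation patterns instead.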
  • Ken-ichi Tanaka, Takehiro Furuta
    Article type: Article
    2012 Volume 55 Pages 161-176
    Published: 2012
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    The flow-capturing location problem (FCLP), originally proposed by Hodgson (1990), identifies desirable facility locations on a network for flows traveling between various origins and destinations. The model locates a given number of facilities on a network so as to maximize the flows that have at least one facility along their travel route. In a railway setting, since users cannot get off the train freely, it is easier for them to access a facility at a station where their train stops; for example, facilities at express stations are easier to access when riding an express train. To model this aspect, we proposed a model in which flows can be covered only at stations where the train stops, depending on the train type. The FCLP and its variants assume that the number of facilities is a fixed input parameter. Often, however, a decision maker faces a limit on the total budget for locating facilities rather than on their number. This paper introduces fixed facility-opening costs into the FCLP, making the number of facilities a decision variable. We present an integer programming formulation of the proposed model and apply it to analyze optimal facility locations among the stations of the Keio Railway Network, which consists of 6 railway lines and 69 stations, using commuter traffic flow data and the cost of posting a billboard advertisement at each station. Optimal solutions of Hodgson's FCLP and of the FCLP with fixed costs are obtained by the mathematical programming solver IBM ILOG CPLEX. Comparing the solutions of the two models shows that optimal solutions of the proposed FCLP tend to capture large volumes of flow at relatively small facility location cost.
    Download PDF (1370K)
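The budgeted flow capture described in the entry above can be illustrated on a toy network: choose any subset of stations whose total opening cost fits the budget, and count a flow as captured if at least one chosen station lies on its route. The stations, costs, and flows below are made-up assumptions; real instances are solved with integer programming, not enumeration:

```python
from itertools import combinations

def best_capture(costs, flows, budget):
    """Brute-force FCLP with fixed opening costs: maximize the captured
    flow volume over all station subsets within the budget."""
    stations = list(costs)
    best = (0, frozenset())
    for r in range(len(stations) + 1):
        for subset in combinations(stations, r):
            if sum(costs[s] for s in subset) > budget:
                continue
            chosen = set(subset)
            captured = sum(vol for route, vol in flows
                           if chosen & set(route))
            if captured > best[0]:
                best = (captured, frozenset(subset))
    return best

costs = {"A": 3, "B": 2, "C": 2, "D": 4}                    # opening costs
flows = [(("A", "B"), 10), (("B", "C"), 7),
         (("C", "D"), 5), (("A", "D"), 4)]                  # (route, volume)
captured, chosen = best_capture(costs, flows, budget=4)
```

Note that the number of opened stations is an outcome here, not an input, which is exactly the change the fixed-cost model introduces.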
  • Yuya Suzuki, Junichi Imai
    Article type: Article
    2012 Volume 55 Pages 177-193
    Published: 2012
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    This paper proposes an efficient quasi-Monte Carlo method for computing risk measures of a portfolio used in risk management. To evaluate the risk of the portfolio, it is necessary to take different types of risk into consideration, such as market risk and interest rate risk. To that end, we introduce the grouped generalized hyperbolic distribution. This distribution has two distinctive features. First, its marginal distributions are flexible enough to describe semi-heavy tails, so it can capture potentially extreme values. Second, by introducing the concept of a group, it can model quite different distribution shapes for different types of risk with a non-linear dependence structure. These two features are crucial for quantitative risk management, and we show that the proposed distribution captures them in a simple yet sufficiently flexible manner. We then develop an efficient quasi-Monte Carlo procedure to evaluate risk measures by employing a dimension reduction method originally proposed for valuing financial options. In our numerical experiments, we compute two well-known risk measures, Value-at-Risk (VaR) and Expected Shortfall (ES), in the presence of both market and interest rate risk, and demonstrate that the proposed method enhances the numerical efficiency of the simulation in calculating these risk measures.
    Download PDF (1387K)
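Whatever simulation scheme produces the loss scenarios, the two risk measures named in the entry above are estimated from the empirical loss distribution. A minimal sketch using a simple empirical-quantile convention (the loss sample is illustrative):

```python
def var_es(losses, alpha=0.95):
    """Empirical Value-at-Risk and Expected Shortfall at level alpha:
    VaR is taken as the alpha-quantile of the sorted losses, and ES is
    the average of the losses at or beyond that quantile."""
    s = sorted(losses)
    k = int(alpha * len(s))          # index of the alpha-quantile
    var = s[k]
    tail = s[k:]
    es = sum(tail) / len(tail)
    return var, es

losses = [float(i) for i in range(1, 101)]   # 1.0, 2.0, ..., 100.0
var, es = var_es(losses, alpha=0.95)
# 95% VaR is the 96th smallest loss (96.0) and ES averages 96..100 (98.0).
```

The paper's contribution lies in generating the scenarios efficiently (grouped generalized hyperbolic sampling with quasi-Monte Carlo and dimension reduction); this post-processing step is the same either way.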
  • Article type: Index
    2012 Volume 55 Pages 194-
    Published: 2012
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    Download PDF (58K)
  • Article type: Appendix
    2012 Volume 55 Pages App4-
    Published: 2012
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    Download PDF (63K)
  • Article type: Cover
    2012 Volume 55 Pages Cover2-
    Published: 2012
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    Download PDF (72K)
  • Article type: Cover
    2012 Volume 55 Pages Cover3-
    Published: 2012
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    Download PDF (72K)