Journal of the Operations Research Society of Japan
Online ISSN : 2188-8299
Print ISSN : 0453-4514
ISSN-L : 0453-4514
Volume 41, Issue 4
Displaying 1-20 of 20 articles from this issue
  • Article type: Cover
    1998 Volume 41 Issue 4 Pages Cover10-
    Published: 1998
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    Download PDF (16K)
  • Article type: Appendix
    1998 Volume 41 Issue 4 Pages App6-
    Published: 1998
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    Download PDF (58K)
  • Tsutomu Konno, Hiroaki Ishii
    Article type: Article
    1998 Volume 41 Issue 4 Pages 487-491
    Published: 1998
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    We have already introduced many fuzzy concepts into scheduling problems and discussed so-called fuzzy scheduling problems. In this paper, we newly introduce a fuzzy allowable-time concept into the two-identical-machine problem, a fuzzy version of the problem studied by M. R. Garey and D. S. Johnson. That is, there are two identical machines, and a fuzzy ready time and a fuzzy deadline are associated with each job; i.e., membership functions representing the satisfaction degree of the start time and the completion time are attached to each job, and the minimal degree among them is to be maximized. Furthermore, a fuzzy precedence relation exists among the jobs, that is, a fuzzy relation whose membership function represents the satisfaction degree of the precedence order of each job pair, and again the minimal degree is to be maximized. The aim is to maximize both minimal degrees simultaneously if possible, but usually no schedule maximizes both, so we seek nondominated schedules. (An illustrative sketch of evaluating these degrees follows this entry.)
    Download PDF (410K)
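    A minimal sketch, not taken from the paper, of how the time-related satisfaction degree of one fixed two-machine schedule could be evaluated. The piecewise-linear membership shapes, job data and all names are assumptions made for illustration; the fuzzy precedence degree would be evaluated analogously and paired with this degree when searching for nondominated schedules.

      # Illustrative only: evaluating the min satisfaction degree of one fixed
      # two-machine schedule under hypothetical piecewise-linear memberships.
      def ramp_up(t, a, b):
          """Satisfaction of a start time t w.r.t. a fuzzy ready time (0 before a, 1 after b)."""
          if t <= a:
              return 0.0
          if t >= b:
              return 1.0
          return (t - a) / (b - a)

      def ramp_down(t, a, b):
          """Satisfaction of a completion time t w.r.t. a fuzzy deadline (1 before a, 0 after b)."""
          if t <= a:
              return 1.0
          if t >= b:
              return 0.0
          return (b - t) / (b - a)

      # job: (processing time, fuzzy ready interval, fuzzy deadline interval) -- made-up data
      jobs = {
          "J1": (3, (0, 1), (4, 6)),
          "J2": (2, (0, 2), (5, 7)),
          "J3": (4, (1, 3), (8, 10)),
      }

      # schedule: job -> (machine, start time); chosen by hand for illustration
      schedule = {"J1": (0, 1), "J2": (1, 2), "J3": (0, 4)}

      def time_satisfaction(schedule, jobs):
          degrees = []
          for j, (_machine, start) in schedule.items():
              p, ready, due = jobs[j]
              degrees.append(min(ramp_up(start, *ready), ramp_down(start + p, *due)))
          return min(degrees)   # the minimal degree that the paper seeks to maximize

      print(time_satisfaction(schedule, jobs))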
  • Hideaki Yamashita, Hiroshi Ohtani, Shigemichi Suzuki
    Article type: Article
    1998 Volume 41 Issue 4 Pages 492-508
    Published: 1998
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    We consider a simple automatic warehousing system with many storage spaces, called slots, in which a single type of item is stored and retrieved. The purpose of this paper is to develop an efficient method for obtaining the marginal probability that each slot in the warehouse is full. The set of such probabilities for all the slots constitutes the spatial inventory distribution of the items in the system. This distribution, which we call the inventory distribution for short, enables us to evaluate key performance characteristics of the system such as the mean travel time of a crane for a single storage or retrieval operation. Such characteristics cannot be obtained from the distribution of the total number of full slots alone. We assume that inventories are controlled by an (s,S) reordering policy, that received items are stored starting from the open slot closest to an I/O point, and that retrieved items are chosen randomly among the currently full slots. Furthermore, the time between retrievals and the time between the placing of an order and the receipt of items are exponentially distributed. Under these assumptions the system can be modeled as a Markov chain. If we use the joint inventory levels of the slots as states, the number of states of the Markov chain amounts to 2^m, where m is the total number of slots. Here we devise exact aggregation methods of the states to reduce the size of the Markov chain. By exploiting the special structure, the total computational complexity for obtaining the inventory distribution is reduced to O(m^4). (A small simulation sketch of the storage and retrieval dynamics follows this entry.)
    Download PDF (1406K)
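    The following is a hedged Monte Carlo sketch of the dynamics described in the abstract, useful only as a rough check of a per-slot inventory distribution; it is not the paper's exact aggregation method, and the parameter values (m, s, S, rates, horizon) are assumptions made for the example.

      import random

      m, s, S = 10, 3, 8          # slots, reorder point, order-up-to level
      mu, nu = 1.0, 0.5           # retrieval rate, rate of the exponential lead time
      T = 200000.0                # simulated time horizon

      full = [True] * S + [False] * (m - S)   # slot 0 is closest to the I/O point
      time_full = [0.0] * m
      t, outstanding = 0.0, None               # outstanding = (arrival time, order quantity)

      while t < T:
          level = sum(full)
          next_retrieval = t + random.expovariate(mu) if level > 0 else float("inf")
          next_arrival = outstanding[0] if outstanding else float("inf")
          t_next = min(next_retrieval, next_arrival, T)
          for i in range(m):                   # accumulate time each slot spends full
              if full[i]:
                  time_full[i] += t_next - t
          t = t_next
          if t >= T:
              break
          if t == next_arrival:                # order arrives: fill the closest open slots
              qty = outstanding[1]
              outstanding = None
              for i in range(m):
                  if qty == 0:
                      break
                  if not full[i]:
                      full[i] = True
                      qty -= 1
          else:                                # retrieval from a randomly chosen full slot
              full[random.choice([i for i in range(m) if full[i]])] = False
              if sum(full) <= s and outstanding is None:
                  outstanding = (t + random.expovariate(nu), S - sum(full))

      print([round(x / T, 3) for x in time_full])   # estimated P(slot i is full)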
  • Kazuyuki Hiraoka, Shuji Yoshizawa
    Article type: Article
    1998 Volume 41 Issue 4 Pages 509-530
    Published: 1998
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    The "lob-pass problem" is a model which is used in the psychology. It describes the phenomena that the same choices decrease the effect, like the experience or the weariness. Abe and Takeuchi formulated it as an on-line learning problem, and pointed out that it is an extension of the multi-armed bandit problem. In the lob-pass problem, the player's choices will change the environment itself. This is the difference from the multi-armed bandit problems. The all proposed strategies for the lob-pass problem repeat the following procedures: (i) observe the reaction from the unknown environment (ii) estimate the environment (iii) find the optimal "stationary" strategy for the estimated environment (iv) determine the choice according to the strategy. Moreover, the criteria for the strategies in these studies are the loss due to uncertainness of the environment, compared with the optimal "stationary" strategy for the known-environment case. To judge whether such policies are appropriate or not, we have to know the optimal strategy, which may not be "stationary", for the known-environment case. It is calculated in the present paper. It is also shown that the "matching condition" assumed in the past studies is the necessary and sufficient condition that the optimal strategy doesn't depend on the stopping time of the game. The meaning and the appropriateness of the matching condition are discussed. Finally, the asymptotically optimality is defined. We prove that the stationary strategy can be asymptotically optimal for the opponent with the forgetting factor, but no strategy is asymptotically optimal for the opponent without the forgetting factor.
    Download PDF (1825K)
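    A schematic toy version of the observe / estimate / optimize-a-stationary-strategy / choose loop summarized in the abstract. The linear environment, the least-squares estimation, the clamped exploration and all parameter values are assumptions for illustration and are not Abe and Takeuchi's exact formulation.

      import random

      def true_success_prob(action, lob_rate):
          # hypothetical environment: success probabilities linear in the lob fraction
          return 0.7 - 0.4 * lob_rate if action == "lob" else 0.3 + 0.3 * lob_rate

      def linear_fit(xs, ys):
          """Ordinary least squares y = a + b*x (falls back to a constant fit)."""
          n = len(xs)
          if n < 2:
              return (ys[0] if ys else 0.5), 0.0
          mx, my = sum(xs) / n, sum(ys) / n
          sxx = sum((x - mx) ** 2 for x in xs)
          if sxx < 1e-12:
              return my, 0.0
          b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
          return my - b * mx, b

      def best_stationary_lob_rate(a_l, b_l, a_p, b_p):
          # maximise q(r) = r(a_l + b_l r) + (1-r)(a_p + b_p r) over r in [0, 1]
          c2, c1 = b_l - b_p, a_l - a_p + b_p
          cands = [0.0, 1.0]
          if c2 < 0:
              cands.append(min(1.0, max(0.0, -c1 / (2 * c2))))
          return max(cands, key=lambda r: c2 * r * r + c1 * r)

      history, wins, rounds = [], 0, 5000   # history entries: (action, lob rate seen, outcome)
      for t in range(rounds):
          lob_rate = sum(a == "lob" for a, _, _ in history) / max(len(history), 1)
          fits = {}
          for act in ("lob", "pass"):                       # (ii) estimate the environment
              pts = [(r, o) for a, r, o in history if a == act]
              fits[act] = linear_fit([r for r, _ in pts], [o for _, o in pts])
          r_star = best_stationary_lob_rate(*fits["lob"], *fits["pass"])  # (iii) stationary optimum
          action = "lob" if random.random() < max(0.05, min(0.95, r_star)) else "pass"  # (iv) choose
          outcome = int(random.random() < true_success_prob(action, lob_rate))          # (i) observe
          wins += outcome
          history.append((action, lob_rate, outcome))

      print("empirical success rate:", wins / rounds)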
  • Akihiro Hashimoto
    Article type: Article
    1998 Volume 41 Issue 4 Pages 531-537
    Published: 1998
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    In the original DEA/CCR (Data Envelopment Analysis/Charnes, Cooper and Rhodes) computation with n DMUs (Decision Making Units), solving n LP (Linear Programming) problems is not enough, when ordinary LP solvers are used, even just to judge whether each DMU is DEA-efficient or not. This is because we must use two-phase optimization unless we have access to DEA software packages that take non-Archimedean infinitesimals into consideration: we must solve Phase I LPs for all n DMUs plus Phase II LPs to see whether the DMUs lying on the extended frontier are DEA-inefficient. This paper shows that the judgment can be achieved by solving only about n LPs if we use the DEA exclusion model instead of the standard DEA model. We should also note this as a merit of the DEA exclusion model in reducing the DEA computational load. (A minimal LP sketch of the Phase I computation follows this entry.)
    Download PDF (551K)
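    A minimal sketch of the input-oriented Phase I CCR envelopment LP, one LP per DMU. The small data set and the use of scipy.optimize.linprog are illustrative assumptions, not the paper's implementation; the exclude_self flag hints at the exclusion model (removing the evaluated DMU from the reference set), which the paper exploits to avoid the Phase II LPs.

      import numpy as np
      from scipy.optimize import linprog

      X = np.array([[4.0, 7.0, 8.0, 4.0, 2.0],    # inputs: rows = inputs, cols = DMUs
                    [3.0, 3.0, 1.0, 2.0, 4.0]])
      Y = np.array([[1.0, 1.0, 1.0, 1.0, 1.0]])   # outputs: rows = outputs, cols = DMUs
      m, n = X.shape
      s = Y.shape[0]

      def ccr_phase1(j0, exclude_self=False):
          """Return the Phase I CCR efficiency score (theta*) of DMU j0."""
          cols = [j for j in range(n) if not (exclude_self and j == j0)]
          # decision variables: theta, then lambda_j for j in cols
          c = np.zeros(1 + len(cols))
          c[0] = 1.0
          A_in = np.hstack([-X[:, [j0]], X[:, cols]])          # sum lambda*x_j <= theta*x_{j0}
          A_out = np.hstack([np.zeros((s, 1)), -Y[:, cols]])   # sum lambda*y_j >= y_{j0}
          A_ub = np.vstack([A_in, A_out])
          b_ub = np.concatenate([np.zeros(m), -Y[:, j0]])
          res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                        bounds=[(None, None)] + [(0, None)] * len(cols))
          return res.fun

      for j in range(n):
          print(j, round(ccr_phase1(j), 4))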
  • Fumio Ishizaki, Tetsuya Takine, Yuji Oie
    Article type: Article
    1998 Volume 41 Issue 4 Pages 538-559
    Published: 1998
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    In this paper, we consider a TCP-like sliding window control with delayed information. We propose a deterministic fluid-flow queueing system which represents a situation where many TCP connections share a bottleneck link of ATM networks. We provide various properties of the transmission rate function (a function of time) which ensure that, with the analytical model, we can compute the throughput performance of TCP over ATM networks. Numerical results show that the synchronization of TCP window control severely degrades the throughput performance. Further, we observe that the throughput is not a continuous function of the peak rate and that there exist regions where the behavior of the throughput has different characteristics. Such complex behavior of the throughput is caused by the complex behavior of the window control.
    Download PDF (1545K)
  • Masatake Nakanishi, Eizo Kinoshita
    Article type: Article
    1998 Volume 41 Issue 4 Pages 560-571
    Published: 1998
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    This paper proposes a new method based on "group decision making stress" and studies its application to the Analytic Hierarchy Process for the purpose of effective group decision making. The method grades the evaluators in such a way as to minimize the sum total of each evaluator's frustration, the "group decision making stress," without manipulating the raw data of each evaluator's preferences. By rationally grading the participants, those who tend to share similar preferences with others are graded relatively high and those with unique preferences are graded relatively low. Every preference, however, is appropriately taken into account, and the result is fair. Applying the method makes it easier to find groups with similar preferences and helps the group preference converge.
    Download PDF (1242K)
  • Atsuko Ikegami, Akira Niwa
    Article type: Article
    1998 Volume 41 Issue 4 Pages 572-588
    Published: 1998
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    This paper deals with the scheduling of nurses to staff shifts at a hospital. The basic difficulty is the necessity of maintaining a certain level of service and skill in the makeup of every shift while balancing the workload among the nurses involved. As a result, it is usually impossible to develop a schedule which satisfies all the requirements, in spite of the time and resources spent in the effort. In this paper we present an efficient approach to this scheduling problem, whose constraints have a block-angular structure: it consists of blocks of constraints that could be dealt with independently were it not for a set of coupling constraints. Each of these blocks corresponds to the set of requirements for a specific nurse, and the coupling constraints are associated with the requirements on the overall makeup of each shift. An objective function is first defined to measure the degree of violation caused by a schedule. We then optimize the problem defined by this objective function and the block of constraints for one nurse, given that the other nurses' schedules are fixed as specified in the current trial schedule. (For the first trial schedule, we use one which leaves all the nurses unassigned.) Using this trial schedule throughout, we optimize every nurse's schedule in turn by fixing those of the other nurses. Out of the resulting schedules, each of which differs from the trial schedule only in the assignments of one nurse, we choose the one with the minimal value of the objective function. This becomes the new trial schedule for the next iteration, and we repeat this iterative process until a satisfactory and hopefully feasible schedule is obtained. We have implemented this approach and constructed an algorithm for a 2-shift case. As the night shifts present tighter and more stringent constraints, it first finds a schedule satisfying them, and then seeks a schedule satisfying the daytime constraints. This approach is particularly effective where there are many constraints. (A toy sketch of the one-nurse-at-a-time improvement loop follows this entry.)
    Download PDF (1728K)
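    A toy illustration of the one-nurse-at-a-time improvement loop described in the abstract. The tiny objective (night-shift coverage shortfall plus a workload penalty) and the brute-force per-nurse re-optimization are assumptions made to keep the sketch self-contained; the paper's actual constraints are far richer.

      from itertools import combinations

      days, nurses = 5, 4
      required = [2, 1, 2, 1, 2]       # nurses needed on night shift each day
      max_nights = 3                   # per-nurse workload limit

      def violation(schedule):
          """Degree of violation of a trial schedule (lower is better)."""
          cover = [sum(schedule[k][d] for k in range(nurses)) for d in range(days)]
          shortfall = sum((required[d] - cover[d]) ** 2 for d in range(days))
          overload = sum(max(0, sum(schedule[k]) - max_nights) for k in range(nurses))
          return shortfall + 10 * overload

      # first trial schedule: every nurse unassigned, as in the abstract
      trial = [[0] * days for _ in range(nurses)]

      improved = True
      while improved:
          improved = False
          best = (violation(trial), None, None)
          for k in range(nurses):                      # re-optimise nurse k, others fixed
              for size in range(max_nights + 1):
                  for on in combinations(range(days), size):
                      row = [1 if d in on else 0 for d in range(days)]
                      cand = [row if i == k else trial[i] for i in range(nurses)]
                      v = violation(cand)
                      if v < best[0]:
                          best = (v, k, row)
          if best[1] is not None:                      # adopt the best single-nurse change
              trial[best[1]] = best[2]
              improved = True

      print(violation(trial), trial)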
  • Yutaka Sakai, Yutaka Takahashi, Toshiharu Hasegawa
    Article type: Article
    1998 Volume 41 Issue 4 Pages 589-613
    Published: 1998
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    We study a discrete-time single-server priority queueing model with vacations under a random-order-of-service discipline within each class. This model captures the behavior of the head-of-line request queues in large input-buffered ATM switches. The server takes vacations when the queue has been empty for a random number of slots. We presume a message consists of a geometrically distributed number of cells. To represent this aspect, we assume that once a message's turn for service comes, it is served for a constant time corresponding to one cell time and rejoins the queue after service with a given probability. We derive the joint probability distribution of the queue length and the waiting time through a probability generating function approach. Mean waiting times are obtained and numerical results are presented. (A simplified simulation sketch of the service mechanism follows this entry.)
    Download PDF (1793K)
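    A greatly simplified single-class simulation of the service mechanism in the abstract: each message is a geometric number of cells, one cell is served per slot, and the message rejoins the queue with probability q after each cell. Priority classes, vacations and the paper's generating-function analysis are omitted, and all parameter values are assumptions.

      import random

      p_arrival, q = 0.2, 0.6         # arrival prob. per slot, P(message needs another cell)
      slots = 200_000
      queue = []                      # arrival slots of waiting messages
      waits, completed = 0, 0

      for t in range(slots):
          if random.random() < p_arrival:          # at most one new message per slot
              queue.append(t)
          if queue:
              i = random.randrange(len(queue))     # random order of service within the class
              arrived = queue.pop(i)
              if random.random() < q:              # needs more cells: rejoins the queue
                  queue.append(arrived)
              else:                                # last cell served: message departs
                  waits += t - arrived
                  completed += 1

      print("mean sojourn time (slots):", waits / completed)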
  • Toshio Hamada
    Article type: Article
    1998 Volume 41 Issue 4 Pages 614-625
    Published: 1998
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    When we create document files on a computer or a word processor, we sometimes suffer the accident that all the files stored on a floppy disk are lost and cannot be reconstructed. In order to avoid this kind of accident, we sometimes make a backup copy of the floppy disk, which is useful when the original floppy disk is broken. We construct a stochastic model that supports the decision about whether or not to make a backup disk when there are k files on the floppy disk for which no backup copy exists, n new files remain to be made, and the probability that the accident occurs is p. The problem is formulated by dynamic programming, and several properties of the optimal strategy are obtained. The case where the true value of p is unknown is also discussed, and several properties are obtained. (A toy dynamic-programming sketch follows this entry.)
    Download PDF (795K)
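    One possible toy formalization of the backup decision, since the abstract does not spell out the paper's exact dynamics or costs: each period one new file is created; before creating it we may back up the k unbacked files at cost CB; during the period an accident occurs with probability P and destroys the unbacked files, each costing CL to redo; the file being created is assumed to survive. All of these modelling choices and parameter values are assumptions.

      from functools import lru_cache

      P, CB, CL = 0.05, 1.0, 3.0

      @lru_cache(maxsize=None)
      def V(k, n):
          """Minimal expected cost with k unbacked files and n new files still to make."""
          if n == 0:
              return 0.0
          return min(CB + step(0, n), step(k, n))   # back up first, or not

      def step(k, n):
          """Expected cost of the next period when k files stay unbacked during it."""
          return P * (CL * k + V(1, n - 1)) + (1 - P) * V(k + 1, n - 1)

      for n in (1, 5, 10):
          thresholds = [k for k in range(0, 15) if CB + step(0, n) < step(k, n)]
          print(f"n={n}: back up when k >= {thresholds[0] if thresholds else 'never'}")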
  • Satoru Fujishige
    Article type: Article
    1998 Volume 41 Issue 4 Pages 626-628
    Published: 1998
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    M. Stoer and F. Wagner, and independently A. Frank, have found a simple proof of the validity of Nagamochi and Ibaraki's min-cut algorithm. This note points out a nice property of the behavior of Nagamochi and Ibaraki's min-cut algorithm, which also gives another simple proof of the validity of their algorithm. The proof relies only on the symmetric submodularity of the cut function. Hence, it also gives another simple proof of the validity of Queyranne's extension of Nagamochi and Ibaraki's algorithm to symmetric submodular function minimization. (A minimal sketch of the underlying min-cut computation follows this entry.)
    Download PDF (272K)
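    For context, a minimal sketch of the maximum-adjacency-ordering min-cut computation in the style of Nagamochi-Ibaraki / Stoer-Wagner, which the note discusses; the small example graph is an assumption for illustration.

      def min_cut(weights):
          """weights: dict of dicts with symmetric nonnegative edge weights."""
          g = {u: dict(nbrs) for u, nbrs in weights.items()}   # working copy; vertices get merged
          best = float("inf")
          while len(g) > 1:
              # maximum adjacency ordering: repeatedly add the vertex most tightly
              # connected to the set of already-ordered vertices
              order = [next(iter(g))]
              conn = {v: g[order[0]].get(v, 0) for v in g if v != order[0]}
              while conn:
                  u = max(conn, key=conn.get)
                  order.append(u)
                  del conn[u]
                  for v, w in g[u].items():
                      if v in conn:
                          conn[v] += w
              s, t = order[-2], order[-1]
              best = min(best, sum(g[t].values()))   # cut separating the last vertex t
              # merge t into s and keep the weights symmetric
              for v, w in g[t].items():
                  if v != s:
                      g[s][v] = g[s].get(v, 0) + w
                      g[v][s] = g[v].get(s, 0) + w
                  g[v].pop(t, None)
              del g[t]
              g[s].pop(t, None)
          return best

      example = {1: {2: 3, 3: 1}, 2: {1: 3, 3: 2, 4: 2},
                 3: {1: 1, 2: 2, 4: 4}, 4: {2: 2, 3: 4}}
      print(min_cut(example))   # weight of a minimum cut of the example graph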
  • Ryusuke Hohzaki, Koji Iida
    Article type: Article
    1998 Volume 41 Issue 4 Pages 629-642
    Published: 1998
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    This paper investigates a search game between a searcher and a target. At the beginning of the search, the target selects his path from some options, and the searcher determines the distribution of his available search resource over a search space which consists of discrete cells and discrete time points. The searcher gains a value on detection of the target while expending a search cost that depends on the allocation of the search resource. The payoff of the search is the expected reward, defined as the expected value minus the expected search cost. The searcher wants to maximize the expected reward and the target wants to minimize it. We formulate the problem as a two-person zero-sum game and reduce it to a concave maximization problem. We propose a computational method to obtain an optimal solution of the game. Our method proceeds in such a way that one-sided problems generated from the original game are repeatedly solved and their solutions converge asymptotically to an optimal solution of the game. Through some examples, we examine the effect of the problem parameters on the optimal solution to elucidate some of its characteristics, and we examine the computational time of the proposed method. (A hedged finite-game sketch follows this entry.)
    Download PDF (1154K)
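    A hedged illustration only: the paper reduces the search game to a concave maximization and solves it via a sequence of one-sided problems. The sketch below instead runs fictitious play on a tiny finite payoff matrix (the searcher's expected reward for each searcher allocation / target path pair), which likewise alternates one-sided best responses and converges in value for zero-sum games; the matrix entries are assumptions, and this is not the paper's algorithm.

      import numpy as np

      # rows: searcher's pure allocation plans, columns: target's paths (made-up payoffs)
      R = np.array([[0.6, 0.1, 0.3],
                    [0.2, 0.5, 0.3],
                    [0.3, 0.3, 0.4]])

      counts_s = np.zeros(R.shape[0])   # how often each searcher plan was a best reply
      counts_t = np.zeros(R.shape[1])   # how often each target path was a best reply
      counts_s[0] = counts_t[0] = 1

      for _ in range(20000):
          # each side best-responds to the opponent's empirical mixture (a one-sided problem)
          s_mix, t_mix = counts_s / counts_s.sum(), counts_t / counts_t.sum()
          counts_s[np.argmax(R @ t_mix)] += 1      # searcher maximizes the expected reward
          counts_t[np.argmin(s_mix @ R)] += 1      # target minimizes it

      s_mix, t_mix = counts_s / counts_s.sum(), counts_t / counts_t.sum()
      print("approximate game value:", float(s_mix @ R @ t_mix))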
  • Article type: Appendix
    1998 Volume 41 Issue 4 Pages 643-645
    Published: 1998
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    Download PDF (306K)
  • Article type: Bibliography
    1998 Volume 41 Issue 4 Pages 646-
    Published: 1998
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    Download PDF (30K)
  • Article type: Appendix
    1998 Volume 41 Issue 4 Pages 647-
    Published: 1998
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    Download PDF (68K)
  • Article type: Index
    1998 Volume 41 Issue 4 Pages 648-650
    Published: 1998
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    Download PDF (192K)
  • Article type: Appendix
    1998 Volume 41 Issue 4 Pages App7-
    Published: 1998
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    Download PDF (79K)
  • Article type: Cover
    1998 Volume 41 Issue 4 Pages Cover11-
    Published: 1998
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    Download PDF (57K)
  • Article type: Cover
    1998 Volume 41 Issue 4 Pages Cover12-
    Published: 1998
    Released on J-STAGE: June 27, 2017
    JOURNAL FREE ACCESS
    Download PDF (57K)