Inventory has two aspects: cost and asset. Many companies have implemented inventory reductions focusing on the cost aspect of inventory, such as storage cost and depreciation expense. Illiquid stock causes cash flow stagnation and should be reduced as a negative asset, because it will become an expense in the future. On the other hand, inventory is a necessary management resource that produces sales and value, thereby increasing cash flow; excessive inventory leanness would therefore degrade the performance of the company.
This research proposes a model for evaluating inventory in terms not only of cost but also of asset value, considering its relationship with demand fluctuations and production processes. Inventory is regarded as a distinct activity in the process, and its value is evaluated taking into account the price of the final product, the possibility of sales, resource losses, and the uncertainty of unrealized profit.
Inventory value is computed by applying the model to the manufacturing process of fabric wire for papermaking. The result is that the value of semi-completed product inventory is larger than that of final product inventory when the possibility of sales is low. This shows that holding semi-completed products as inventory is likely to be more profitable than holding final products.
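A toy expected-value calculation illustrates why a semi-completed unit can be worth more than a finished one when the possibility of sales is low: less cost is sunk into it, so less is lost if it never sells. This is a minimal sketch, not the paper's model; the prices, invested costs, and sales probability below are invented for illustration.

```python
# Toy sketch (not the paper's model): expected asset value of one
# inventory unit held at a given process stage. All figures hypothetical.

def inventory_value(sale_price, invested_cost, p_sale, salvage=0.0):
    """Expected value of one unit: with probability p_sale it is sold at
    sale_price; otherwise the invested cost is lost except for salvage."""
    return p_sale * sale_price - (1 - p_sale) * (invested_cost - salvage)

# A final product has more cost sunk into it than a semi-completed one,
# so when the possibility of sales is low, its expected loss is larger.
p_sale = 0.3  # low possibility of sales
semi = inventory_value(sale_price=100, invested_cost=40, p_sale=p_sale)
final = inventory_value(sale_price=100, invested_cost=90, p_sale=p_sale)
print(semi > final)  # True: semi-completed stock retains more expected value
```

With these numbers the semi-completed unit has an expected value of about 2 while the final product's is negative; as p_sale rises toward 1, the ordering reverses, which mirrors the trade-off the abstract describes.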
There are many network systems in the world, for example, the Internet, electricity networks, and traffic networks. In this study, we consider a two-objective network design problem whose objectives are all-terminal reliability and construction/operation/maintenance costs. There is a trade-off between reliability and cost: in general, it is rare that a single network simultaneously achieves both optimum all-terminal reliability and optimum cost. Therefore, we must consider an algorithm for obtaining Pareto solutions. Reliability and cost problems for network systems have been studied for a long time, and numerous papers have been published. Existing algorithms are efficient for calculating the all-terminal reliability of a single network; however, they are inefficient for obtaining Pareto solutions, because the all-terminal reliability and cost must be calculated for every sub-network. Consequently, these algorithms require much computing time when the number of nodes or edges is large. To obtain the Pareto front efficiently, we propose an algorithm that does not need to consider all sub-networks; it selects only parts of the networks for calculation. We investigated the relations between edges that tend to form Pareto solutions and the other edges, and obtained some properties that Pareto solutions are likely to satisfy. These properties define a search space in which networks take values close to those of Pareto solutions. Combining the properties found in our study, we construct algorithms for obtaining Pareto solutions that restrict the number of networks to be evaluated. The Pareto solutions obtained with this reduction may be a proper subset of the true Pareto set, so we evaluate both the computing time and the accuracy of the proposed algorithms through numerical experiments.
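The cost of considering all sub-networks can be made concrete with a brute-force baseline. The sketch below is our own illustration, not the paper's algorithm: the four-node example network, its edge reliability, and its unit edge costs are invented. It computes all-terminal reliability by enumerating edge states (O(2^m) per network) and then builds the Pareto front by enumerating every connected sub-network, which is exactly the exhaustive search the proposed reduction avoids.

```python
from itertools import combinations, product

def connected(n, edges):
    """True if all n nodes are joined by the given edges (union-find)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(i) for i in range(n)}) == 1

def all_terminal_reliability(n, edges, p):
    """Probability that every node can reach every other node, where each
    edge works independently with probability p (state enumeration)."""
    m = len(edges)
    r = 0.0
    for states in product([0, 1], repeat=m):
        up = [e for e, s in zip(edges, states) if s]
        if connected(n, up):
            k = sum(states)
            r += p ** k * (1 - p) ** (m - k)
    return r

def pareto_front(n, candidate_edges, p, cost):
    """Exhaustively evaluate every connected sub-network and keep the
    (cost, reliability) pairs that no other sub-network dominates."""
    points = []
    for k in range(1, len(candidate_edges) + 1):
        for sub in combinations(candidate_edges, k):
            if not connected(n, sub):
                continue  # cannot connect all terminals: skip
            points.append((sum(cost[e] for e in sub),
                           all_terminal_reliability(n, sub, p)))
    return [a for a in points
            if not any(b[0] <= a[0] and b[1] >= a[1] and b != a
                       for b in points)]

# Hypothetical 4-node network: a cycle plus one chord, edge reliability 0.9.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
cost = {e: 1 for e in edges}
front = pareto_front(4, edges, 0.9, cost)
```

Even this tiny instance evaluates up to 2^5 sub-networks, each needing a 2^m state enumeration, which is why restricting the candidate set matters as networks grow.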
Mixed-level orthogonal arrays are used in experimental design, and the results of an experiment are sometimes fitted to a response surface. It is known that, in the case of L18 and L36, the two-factor interaction effects are not equally confounded with the effects of the columns of the orthogonal array; however, the problems that arise when these arrays are applied to a response surface have not yet been studied. In this paper, we classify the confounding patterns by visualizing the confounding, and we examine which columns should be allocated based on the condition number. As a result, we show the best allocation of mixed-level orthogonal arrays when they are applied to a response surface.
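To make the role of the condition number concrete, the following sketch (our own illustration, not the paper's procedure; the design and its coding are assumptions) builds a second-order response-surface model matrix from a coded design and computes its condition number, the quantity by which alternative column allocations of an array such as L18 or L36 would be compared.

```python
import numpy as np

def model_matrix(design):
    """Second-order response-surface model matrix: intercept, main
    effects, pure quadratic terms, and pairwise interactions."""
    n, k = design.shape
    cols = [np.ones(n)]
    cols += [design[:, i] for i in range(k)]
    cols += [design[:, i] ** 2 for i in range(k)]
    cols += [design[:, i] * design[:, j]
             for i in range(k) for j in range(i + 1, k)]
    return np.column_stack(cols)

def condition_number(design):
    # 2-norm condition number: ratio of largest to smallest singular value
    return np.linalg.cond(model_matrix(design))

# Hypothetical 9-run design with two 3-level factors coded -1/0/+1
# (a full 3^2 factorial, used here only to show the computation).
d = np.array([(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1)], dtype=float)
print(condition_number(d))
```

A smaller condition number indicates a better-conditioned least-squares fit of the response surface; comparing this value across candidate column allocations is the kind of criterion the abstract refers to.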