This research addresses the risk management of Japanese equity portfolios composed of equities whose returns exhibit large skew. The multivariate normal distribution is usually adopted in portfolio risk management because of its convenience. For the portfolios considered here, however, we should adopt a more flexible distribution that captures the skewness and kurtosis of each equity return and utilize a t copula to incorporate the correlations among the return distributions.
To shed some light on the risk management, this research examines it from the following viewpoints. First, not only the normal and t-distributions but also the generalized hyperbolic skew Student's t-distribution, which can capture the skewness and kurtosis of the return distribution, are adopted as the marginal distribution of each equity return. Second, the accuracy of the tail risk measure is examined with respect to both the percentile point and the number of equities in the portfolio.
The results suggest that the combination of the generalized hyperbolic skew Student's t-distribution for each equity return and the t copula with correlations under market stress generates relatively good tail risk measures for percentile points from 1.0% to 5.0% and across the numbers of equities in the portfolio.
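The approach described above can be sketched as follows. This is a minimal illustration, not the study's implementation: the correlation matrix, copula and marginal degrees of freedom, and skewness parameter are all hypothetical, the GH skew Student's t-distribution is sampled through its normal mean-variance mixture representation, and the marginal inverse CDF is approximated empirically.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def skew_t_sample(mu, beta, nu, size, rng):
    # GH skew Student's t via its normal mean-variance mixture representation:
    # X = mu + beta*W + sqrt(W)*Z, with W ~ InvGamma(nu/2, nu/2), Z ~ N(0, 1)
    w = stats.invgamma.rvs(nu / 2, scale=nu / 2, size=size, random_state=rng)
    return mu + beta * w + np.sqrt(w) * rng.standard_normal(size)

def t_copula_uniforms(corr, nu, n, rng):
    # draw from a t copula: correlated multivariate t mapped through the t CDF
    z = rng.standard_normal((n, corr.shape[0])) @ np.linalg.cholesky(corr).T
    chi2 = stats.chi2.rvs(nu, size=(n, 1), random_state=rng)
    return stats.t.cdf(z / np.sqrt(chi2 / nu), df=nu)

corr = np.array([[1.0, 0.6], [0.6, 1.0]])      # hypothetical stress correlation
u = t_copula_uniforms(corr, nu=5, n=50_000, rng=rng)

# map the copula uniforms through an empirical inverse CDF of the skew-t marginal
ref = np.sort(skew_t_sample(mu=0.0, beta=-0.5, nu=8, size=200_000, rng=rng))
returns = ref[np.clip((u * ref.size).astype(int), 0, ref.size - 1)]

port = returns.mean(axis=1)                    # equally weighted portfolio return
var_1pct = np.quantile(port, 0.01)             # 1% tail risk (VaR) estimate
```

The key point is that the t copula keeps the joint tail dependence while each marginal carries its own skewness and heavy tails, so the 1% portfolio quantile differs from the multivariate-normal answer.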
In this paper, the new postseason format introduced from the 2015 season of the J1 League, the top division of professional football in Japan, is simulated and analyzed statistically. The official regulation defines that the new postseason consists of five teams selected by different principles: the top three teams by total season points, and the winning teams of the 1st and 2nd halves of the season.
The simulation results conclude that overlaps among these five teams occur very frequently: the postseason tournament will be held with 3, 4, or 5 teams (the last case meaning no overlaps) with probabilities of 62%, 35%, and 3%, respectively.
The result is obtained by numerical simulation of 10^5 seasons. The goal-scoring model is based on the results of five seasons (2010 to 2014) and is constructed by regression analysis.
This result clarifies that the new postseason format is inherently a one-stage format, not the two-stage format officially defined, because the selection conditions for the postseason tournament are not designed correctly.
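The structure of such a simulation can be sketched as follows. This is only an illustrative stand-in for the study's model: the team scoring rates are made up rather than fitted by regression to 2010–2014 data, a plain Poisson goal model replaces the paper's regression model, and the simulation count is kept small.

```python
import numpy as np

rng = np.random.default_rng(1)
N_TEAMS, N_SIMS = 18, 2000    # J1 has 18 teams; simulation count kept small here

def simulate_half(strength, rng):
    # one half-season in which every team hosts every other team once,
    # with Poisson goal scoring at each team's (hypothetical) rate
    g_home = rng.poisson(strength[:, None], (N_TEAMS, N_TEAMS))
    g_away = rng.poisson(strength[None, :], (N_TEAMS, N_TEAMS))
    win, loss = g_home > g_away, g_home < g_away
    draw = g_home == g_away
    np.fill_diagonal(win, False)
    np.fill_diagonal(loss, False)
    np.fill_diagonal(draw, False)
    # 3 points per win, 1 per draw; rows are home results, columns away results
    return 3 * win.sum(1) + draw.sum(1) + 3 * loss.sum(0) + draw.sum(0)

strength = rng.uniform(1.0, 1.8, N_TEAMS)   # hypothetical goal-scoring rates
counts = {3: 0, 4: 0, 5: 0}
for _ in range(N_SIMS):
    h1, h2 = simulate_half(strength, rng), simulate_half(strength, rng)
    top3 = np.argsort(h1 + h2)[-3:]          # top three by total season points
    qualifiers = {int(i) for i in top3} | {int(h1.argmax()), int(h2.argmax())}
    counts[len(qualifiers)] += 1             # distinct teams in the postseason
```

Counting the size of `qualifiers` per simulated season directly measures how often the half-season winners overlap with the top three, which is the quantity behind the 62%/35%/3% figures.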
A double-track auction is a new auction mechanism to allocate multiple indivisible goods to buyers who view items as both substitutes and complements. This mechanism is significant since existing multi-good auction mechanisms can only allocate substitutes. One experimental study with two goods and two buyers, however, suggests that double-track auctions do not work as the theory predicts because of interference from a theoretically unnoticed price called the “pitfall” price. This study investigates the performance of a double-track auction with an experiment incorporating controls on the pitfall price. In the experiment, two goods are for sale, and two buyers who view these goods as complements bid for them. The experiment consists of two conditions: one with and one without the pitfall price. The main result is that the pitfall price negatively affects performance: the pitfall condition achieves competitive equilibrium at a rate 60% lower than the condition without it. Without the pitfall price, however, the auction works as the theory indicates. Therefore, double-track auctions allocate items as predicted only when there is no pitfall price.
Using optimization to minimize the maximum vote-value disparity and enumeration for the districting problem, we can obtain many candidate electoral districts. However, it is difficult to choose a good constituency among them, because there is no indicator other than the gap in the value of votes. The Supreme Court accepts the discretionary authority of the Diet while regarding the disparity in the value of a vote as the most important matter. However, it is difficult to judge whether any given exercise of this discretionary power by the Diet was appropriate. The purpose of this research is to propose closeness and a degree of divergence as new indices for decision making and evaluation. Closeness is an indicator that estimates the intimacy between the municipal districts constituting an electoral district. The degree of divergence is an indicator that measures the estrangement from the current electoral district. In addition, we show that these indices are useful in decision making and in evaluating validity.
In disasters such as tsunamis, people should be evacuated quickly from the affected area. In this article, we consider a dynamic network flow model of evacuation planning for the people in the affected area. In the model, we represent the evacuation of the people as a dynamic flow, and an effective evacuation plan as the lexicographically quickest flow. More specifically, we present a model in which the capacity constraints of refuges are taken into account. We conduct computational experiments using geospatial information and census data of local cities in Japan.
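The core computation can be illustrated with a time-expanded network, a standard device for dynamic flows. The sketch below is an assumption-laden toy, not the article's algorithm: it checks only how many evacuees can reach refuges within a horizon T (the quickest flow is then the smallest T accommodating everyone), the lexicographic refinement is omitted, and the tiny instance (one area `A`, one refuge `R`) is made up.

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a dict-of-dicts residual network."""
    total = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][w] for u, w in path)   # bottleneck on the path
        for u, w in path:
            cap[u][w] -= aug
            cap[w][u] = cap[w].get(u, 0) + aug  # residual arc
        total += aug

def evac_flow(arcs, supply, refuge_cap, T):
    """Evacuees reaching refuges within T steps, via a time-expanded network.

    arcs: (u, v, capacity, travel_time); supply: evacuees per node;
    refuge_cap: capacity of each refuge."""
    cap = defaultdict(dict)
    nodes = set(supply) | {u for u, _, _, _ in arcs} | {v for _, v, _, _ in arcs}
    for v, n in supply.items():
        cap['S'][(v, 0)] = n                    # evacuees start at time 0
    for t in range(T):
        for v in nodes:
            cap[(v, t)][(v, t + 1)] = float('inf')   # waiting in place
        for u, v, c, tau in arcs:
            if t + tau <= T:
                cap[(u, t)][(v, t + tau)] = c
    for r, c in refuge_cap.items():
        cap[(r, T)]['SINK'] = c                 # refuge capacity constraint
    return max_flow(cap, 'S', 'SINK')

arcs = [('A', 'R', 2, 1)]   # one road: capacity 2 people/step, travel time 1
reached = evac_flow(arcs, {'A': 4}, {'R': 5}, T=2)
```

Scanning T upward until the flow equals the total supply gives the evacuation completion time under the refuge capacity constraint.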
This paper describes how to measure and estimate the synergy effects of advertising (cross-media effects), assuming consumer heterogeneity, by using a hierarchical Bayesian ordered logit model. In this study, we carried out an experiment to measure and estimate the cross-media effects. The results suggest that the synergy effects are associated with a brand preference measure. Moreover, the results suggest that the more a consumer likes the brand or industry, the bigger each advertising effect becomes. These results support both the AIDMA theory and Ehrenberg's weak advertising theory. However, the cross-media effect has both positive and negative components associated with the brand preference measure. This result suggests that there are two types of effect: one is the weak advertising effect, and the other is a diminishing effect caused by repeated exposure to advertising.
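The likelihood kernel of the ordered logit part can be written compactly. This sketch shows only the category probabilities; the hierarchical part of the model (consumer-specific coefficients drawn from a population distribution and estimated by MCMC) is omitted, and the utilities and cutpoints below are hypothetical numbers for illustration.

```python
import numpy as np

def ordered_logit_probs(xb, cutpoints):
    # P(y = k | x) = F(c_k - x'b) - F(c_{k-1} - x'b), F = logistic CDF,
    # with c_0 = -inf and c_K = +inf (approximated by large finite bounds)
    c = np.concatenate(([-1e10], np.asarray(cutpoints, dtype=float), [1e10]))
    z = c[None, :] - np.asarray(xb, dtype=float)[:, None]
    F = 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))
    return np.diff(F, axis=1)           # one row per respondent, one column per category

# hypothetical respondent-level utilities x'b and common cutpoints
xb = np.array([-1.0, 0.0, 2.5])
probs = ordered_logit_probs(xb, cutpoints=[-0.5, 0.5, 1.5])
```

In a hierarchical specification, each respondent's coefficient vector would enter `xb` individually, which is how the model links advertising exposure to ordered brand-preference responses while allowing consumer heterogeneity.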
A number of Japanese banks have utilized credit scoring models to manage their debtors' credit risk. It is common to use a logistic regression model to calculate the credit scores of small-sized firms with twenty or fewer employees, based on the correlations between financial indicators and default occurrence. However, these correlations are weaker for small-sized firms than for medium- or large-sized firms, since most small-sized firms are run by the owner's family members and deficits incurred in their businesses are compensated with the owner's private assets. Hence, the accuracy ratio of the credit scoring model for small-sized firms is not as high as expected. Hibiki, Ogi and Toshiro (2010) suggested using “firm age” as a variable in the model, and analyzed a data set of more than 480 thousand Japanese small-sized firms for the period from 2004 to 2007. They revealed that the default rate as a function of firm age can be expressed by a cubic function, and confirmed that introducing this cubic function into the model as a variable improves the accuracy ratio for small-sized firms. However, the robustness of our model was not sufficiently confirmed because only four years of data were available. In this paper, we extend the data period from 2004 to 2011 and analyze a data set of more than a million Japanese small-sized firms. The results confirm that our model is sufficiently robust. In addition, we reveal that (i) the effect of firm age is robust over time, and (ii) the cubic function of firm age can be used as a proxy variable for the owner's private assets.
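The two ingredients of this approach, a logistic model with a cubic firm-age term and the accuracy ratio used to evaluate it, can be sketched on simulated data. The coefficients below are invented for illustration and are not the estimates from the firm data set.

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical coefficients for the cubic firm-age term; the actual values
# are estimated from the firm data set and are not reproduced here
b0, b1, b2, b3 = -2.0, -0.25, 0.01, -1e-4

age = rng.uniform(1.0, 50.0, 20_000)
logit = b0 + b1 * age + b2 * age**2 + b3 * age**3
pd_true = 1.0 / (1.0 + np.exp(-logit))        # default probability from the model
default = rng.random(age.size) < pd_true      # simulated default flags

def accuracy_ratio(score, default):
    # AR (Gini) = 2*AUC - 1, with AUC from the rank-sum (Mann-Whitney) formula;
    # score is ordered so that a higher value means a riskier firm
    rank = np.empty(score.size)
    rank[np.argsort(score)] = np.arange(1, score.size + 1)
    n1 = default.sum()
    n0 = default.size - n1
    auc = (rank[default].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)
    return 2 * auc - 1

ar = accuracy_ratio(pd_true, default)
```

The cubic term lets the default rate fall and then rise again over the firm's life, which a linear age effect cannot express; the accuracy ratio then quantifies how much discriminatory power the added term contributes.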
This paper treats numerical methods for solving systems of nonsmooth equations. Such problems arise in solving variational inequality problems, complementarity problems and so forth. Although smoothing Newton methods are known as efficient methods for solving systems of nonsmooth equations, they cannot be applied directly to large-scale problems because of the memory required to store matrices. On the other hand, conjugate gradient methods have attracted particular attention for solving large-scale unconstrained optimization problems, because they do not require the use of matrices.
In this paper, by combining the smoothing technique with a PRP-type scaled conjugate gradient method, we propose a smoothing and scaling conjugate gradient method that does not use any matrices. Moreover, we show its global convergence. Finally, some numerical results are given.
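The general idea, smoothing the nonsmooth residual and then running a matrix-free PRP conjugate gradient iteration on the least-squares merit function, can be illustrated on a toy system. This is a sketch of the idea only, not the authors' algorithm: the scaling step is omitted, the test system max(x, 0) + x - c = 0 and its CHKS smoothing are chosen for convenience, and the parameter schedule is ad hoc.

```python
import numpy as np

def smax(x, mu):
    # CHKS smoothing of max(x, 0): (x + sqrt(x^2 + mu^2)) / 2
    return 0.5 * (x + np.sqrt(x * x + mu * mu))

def dsmax(x, mu):
    return 0.5 * (1.0 + x / np.sqrt(x * x + mu * mu))

c = np.array([1.0, -1.0])                 # hypothetical right-hand side

def residual(x, mu):
    # smoothed residual of the nonsmooth system max(x, 0) + x - c = 0
    return smax(x, mu) + x - c

def merit(x, mu):
    r = residual(x, mu)
    return 0.5 * (r @ r)

def grad(x, mu):
    # gradient of the merit function; the smoothed Jacobian here is diagonal,
    # so no matrix is ever stored
    return (dsmax(x, mu) + 1.0) * residual(x, mu)

x = np.zeros(2)
mu = 1.0
g_prev, d = None, None
for _ in range(300):
    g = grad(x, mu)
    if d is None:
        d = -g
    else:
        denom = g_prev @ g_prev
        beta = max(0.0, g @ (g - g_prev) / denom) if denom > 0 else 0.0  # PRP+
        d = -g + beta * d
        if g @ d >= 0:                    # safeguard: fall back to steepest descent
            d = -g
    t, f0 = 1.0, merit(x, mu)
    while merit(x + t * d, mu) > f0 + 1e-4 * t * (g @ d) and t > 1e-12:
        t *= 0.5                          # backtracking (Armijo) line search
    x = x + t * d
    g_prev = g
    mu = max(0.9 * mu, 1e-8)              # drive the smoothing parameter to zero
```

For this system the nonsmooth solution is x = (0.5, -1.0); the iteration approaches it as the smoothing parameter shrinks, using only vector operations throughout.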