We develop a theoretical model to evaluate settings of artificial markets that incorporate a realistic pricing mechanism. We show that the model can evaluate these settings in an environment where a dynamic micro-mechanism plays an important role, for example, a price rebound after a sharp fall in stock markets. Stylized facts, which are long-term statistics, cannot evaluate such a dynamic situation. We emphasize that dynamic situations which stylized facts cannot evaluate are very important for analyzing market crashes and/or market regulations.
Many operational decisions of a company or an organization can be captured as a combinatorial optimization problem and, when the problem is clearly defined and appropriately formulated, it can be handled by a decision maker with the help of a suitable computerized algorithm. However, in practical situations, it is often the case that the information required for clearly defining the problem is not fully available to a single decision maker but is dispersed among multiple stakeholders. This makes the problematic situation ill-defined and difficult for the decision maker to deal with properly alone. Thus, this paper takes up an undefinable shortest path problem as an example and proposes a prediction market approach for collectively solving it with a team of stakeholders. The approach aggregates the dispersed information on the problematic situation from the stakeholders through the market mechanism. After modeling the ill-defined situation as a shortest path problem with uncertainties in arc lengths, the paper discusses how to design the prediction security and market institution for collectively resolving the situation. It then conducts laboratory experiments to investigate how the proposed approach actually works, and further discusses how to generalize the approach to the case where the topology of the network is also uncertain.
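The two-stage idea, aggregating dispersed arc-length estimates and then solving the resulting shortest path problem, can be sketched as follows. This is a minimal illustration: the node names, the toy estimates, and the use of a plain average as the aggregation rule are all assumptions made here; in the paper the aggregation is performed by the prediction market mechanism, not a simple mean.

```python
import heapq

def aggregate_arc_lengths(stakeholder_estimates):
    """Combine each stakeholder's estimate of an arc's length.

    A plain mean is used only as a stand-in for the market-based
    aggregation described in the paper.
    """
    return {arc: sum(est) / len(est)
            for arc, est in stakeholder_estimates.items()}

def shortest_path(nodes, arc_lengths, source, target):
    """Dijkstra's algorithm on the aggregated arc lengths."""
    adj = {n: [] for n in nodes}
    for (u, v), w in arc_lengths.items():
        adj[u].append((v, w))
    dist = {n: float("inf") for n in nodes}
    prev = {}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (d + w, v))
    path, n = [target], target
    while n != source:
        n = prev[n]
        path.append(n)
    return list(reversed(path)), dist[target]

# Three stakeholders report noisy, partial knowledge of each arc.
estimates = {
    ("A", "B"): [2.0, 3.0, 2.5],
    ("A", "C"): [5.0, 4.0, 4.5],
    ("B", "C"): [1.0, 1.5, 1.25],
}
lengths = aggregate_arc_lengths(estimates)
path, total = shortest_path({"A", "B", "C"}, lengths, "A", "C")
print(path, total)  # ['A', 'B', 'C'] 3.75
```

Once the estimates are aggregated, the problem becomes a standard well-defined shortest path instance; the difficulty addressed by the paper lies entirely in the aggregation step.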
This study presents a computer simulation model to analyze the risk of transmission of financial distress in a bank credit network and the resulting knock-on defaults of banks. We quantify the impact on the number of defaults of the topology of the bank credit network, the balance sheets of banks including the equity capital ratio, and the capital surcharge on big banks.
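The contagion mechanism at the core of such a model can be sketched in a few lines. This is a minimal illustration under assumed rules (a bank defaults when its losses on loans to defaulted counterparties exceed its equity, with full loss given default); it is not the paper's full balance-sheet model, and the bank names and exposure figures are invented for the example.

```python
def knock_on_defaults(equity, exposures, initial_default):
    """Propagate defaults through a bank credit network.

    equity:    dict bank -> equity capital
    exposures: dict (lender, borrower) -> loan amount
    A bank defaults when its cumulative losses on loans to
    defaulted counterparties exceed its equity (assumed rule).
    """
    losses = {b: 0.0 for b in equity}
    defaulted = {initial_default}
    frontier = [initial_default]
    while frontier:
        new_frontier = []
        for borrower in frontier:
            for (lender, b), amount in exposures.items():
                if b == borrower and lender not in defaulted:
                    losses[lender] += amount
                    if losses[lender] > equity[lender]:
                        defaulted.add(lender)
                        new_frontier.append(lender)
        frontier = new_frontier
    return defaulted

# Toy network: B lent to A, C lent to both A and B.
equity = {"A": 10.0, "B": 3.0, "C": 8.0}
exposures = {("B", "A"): 5.0, ("C", "B"): 9.0, ("C", "A"): 2.0}
cascade = knock_on_defaults(equity, exposures, "A")
print(sorted(cascade))  # ['A', 'B', 'C']
```

In this toy case the initial default of A wipes out B, and the combined losses from A and B then exceed C's equity, illustrating how the network topology and equity capital ratios jointly determine the number of knock-on defaults.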
The concept of the ``wisdom of crowds'' has attracted attention as a way to find new insights by appropriately processing the large amount of information possessed by crowds. A prediction market is one such estimation method, which uses the mechanisms of financial markets such as stock or exchange markets to realize the ``wisdom of crowds''. In this study, we use agent-based simulation to clarify the conditions that make prediction markets effective. An artificial market is a virtual financial market run on a computer, in which agents participate as computer programs that play the role of virtual dealers. In the simulation, we examine the influence of the following parameters: the information transmission frequency, the retention of motivation, and the gap in information-receiving abilities. The results of this study suggest that prediction markets produce more accurate results than opinion polls under the following conditions: a gap in information-receiving abilities and relatively low motivation.
Securities analysts disclose their opinions on stocks and write reports indicating how they believe the target firm will perform against the market index. Many brokerage firms issue ratings on a five-point scale, namely, ``strong buy'', ``buy'', ``neutral'', ``sell'' and ``strong sell''. Empirically, it is known that firms downgraded from ``strong buy'' to ``buy'' lose value despite the fact that the analyst's signal is still positive. We investigate the characteristics of firms that lose large market value in the post-downgrade period. Using a data-mining approach, we find that higher pre-downgrade volatility is strongly associated with negative returns in the post-downgrade period. Among high-volatility firms, small-capitalization stocks and stocks with inferior sentiment are particularly vulnerable to such downgrades. The result is consistent with a well-known hypothesis in finance: the higher the level of disagreement among investors in the market, the more overvalued the stock remains.
The dealer model is an agent-based model that simulates simplified dealer behavior and satisfies various empirical laws of the foreign exchange markets by tuning three major parameters. In this study, we improve the dealer model to satisfy a newly established empirical law concerning the widening of the spread in response to big market price changes. As a result, when big news occurs and the market becomes turbulent, this new model can reproduce the broadening of the distribution of price changes. For the peculiar price changes caused by official interventions in the foreign exchange market by the Bank of Japan, this model can be used to estimate intervention strategies and market responses.
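A minimal sketch of a dealer-type model with the spread-widening response might look as follows. All parameter names and values here are illustrative assumptions, as is the specific widening rule (the quoted spread grows with the magnitude of the most recent price change and then fades); they are not the paper's calibrated model.

```python
import random

def dealer_model(n_dealers=10, steps=2000, base_spread=1.0,
                 noise=0.1, trend_weight=0.5, memory=0.9, seed=0):
    """Sketch of a dealer model with a spread-widening rule.

    Each dealer quotes a bid at mid[k] and an ask at
    mid[k] + eff_spread; a trade occurs when the best bid crosses
    the best ask, and the market price is set at the midpoint.
    The widening rule and parameters are illustrative assumptions.
    """
    rng = random.Random(seed)
    mid = [rng.uniform(99.0, 101.0) for _ in range(n_dealers)]
    prices = [100.0]
    trend = 0.0  # most recent price change, fading over time
    for _ in range(steps):
        # The quoted spread widens after big price changes.
        eff_spread = base_spread * (1.0 + 10.0 * abs(trend))
        i = max(range(n_dealers), key=lambda k: mid[k])  # best bid
        j = min(range(n_dealers), key=lambda k: mid[k])  # best ask
        if mid[i] >= mid[j] + eff_spread:
            price = 0.5 * (mid[i] + mid[j] + eff_spread)
            trend = price - prices[-1]
            prices.append(price)
            mid[i] = mid[j] = price - 0.5 * eff_spread
        for k in range(n_dealers):
            # Idiosyncratic noise plus a shared trend-following term.
            mid[k] += noise * rng.gauss(0.0, 1.0) + trend_weight * trend
        trend *= memory  # the influence of a price change fades
    return prices

prices = dealer_model()
print(len(prices))
```

The intended effect of the widening rule is visible in the mechanics: a large price change inflates `eff_spread`, which temporarily suppresses further trades and, when the market is turbulent, fattens the tails of the price-change distribution.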
We aim to develop a new factor of stock BBS postings that is different from our BMB factor. In our previous study, the contents of stock BBS postings were classified into two categories, i.e., bullish postings and bearish postings, and our BMB factor is based on these categories. The results of a recent study suggest that the contents of stock BBS postings may be represented by more than one index. To develop a new factor, we use morphological analysis and principal component analysis (PCA) to analyze the contents of stock BBS postings. As a result, we develop a new factor that is based on principal component scores, represents stock returns, and is not correlated with our BMB factor.
Standard & Poor's (S&P) downgraded American government bonds from AAA to AA+ last year. The effects of the downgrade on financial markets have been studied in financial engineering, economics and computational finance, but not in agent-based simulation studies. In this paper, we investigate the effect of a rating system (e.g., S&P's) on asset price fluctuations in an artificial market, which is an agent-based simulation model of a financial market. The rating information is defined as a discrete version of the fundamental value. Four strategies, the noise trader, the fundamentalist, the trend predictor, and the contrarian trader, were assumed in previous studies of the artificial market; in addition, we assume a new agent called the ``rating user'', which uses the rating value, defined as the discrete value of the fundamental value of an asset. We investigate whether the rating user makes the artificial market unstable. First, the simulation results show that the kurtosis of asset price returns in the market without fundamentalists is higher than in the market without rating users. This suggests that the usage of rating information makes the artificial market unstable. The simulation outcomes also suggest that the volatility clustering of asset price returns is stronger in the market without fundamentalists than in the market without rating users. Second, we investigate how two parameters that control the rating value, the update interval and the rating length, make the market stable. The simulation outcomes show that both the standard deviation and the kurtosis of asset price returns become smaller as the update interval increases, whereas both become larger with increasing rating length. These results imply that, to make the artificial market stable, the rating information should be updated at short intervals and the length of the rating should be moderate.
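The rating value used by the ``rating user'' can be sketched as a discretization of the fundamental value controlled by the two parameters above. The averaging window and the coarseness of the rating grid are illustrative assumptions here; the paper only specifies that the rating is a discrete version of the fundamental value, refreshed at a given update interval.

```python
def rating_value(fundamental_history, update_interval, rating_length, t):
    """Discrete rating derived from the fundamental value.

    The rating is refreshed every `update_interval` steps and set
    to the average fundamental value over the last `rating_length`
    steps before the refresh, snapped to a coarse grid.  The grid
    size and the averaging rule are assumptions for illustration.
    """
    grid = 50.0  # coarseness of the discrete rating scale (assumed)
    t_update = (t // update_interval) * update_interval
    window = fundamental_history[max(0, t_update - rating_length):t_update + 1]
    avg = sum(window) / len(window)
    return grid * round(avg / grid)

# A slowly drifting fundamental value.
fundamentals = [10000 + 3 * t for t in range(100)]
print(rating_value(fundamentals, update_interval=20, rating_length=10, t=47))
```

A longer update interval makes the rating a staler signal, and a longer rating length makes it a smoother one, which is how the two parameters can shift the standard deviation and kurtosis of returns in the directions the simulations report.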
This paper describes a function for the automatic generation of high-school-level mechanics problems for adaptive exercises. We define derivative problems from the viewpoint of specialization-generalization or partialization-expansion of the solution structure of the original problem. In this framework, a problem is characterized by the situation model it belongs to and by the solution structure that describes how to solve it. Specialization or generalization is applied to the situation model: relations between situation models from the viewpoint of specialization or generalization are described as a microworld graph, and by following its connections, a problem can be specialized or generalized. Partialization or expansion is applied to the solution structure: by partializing or expanding the solution structure of the original problem, partialized or expanded problems are generated. We call these generated problems ``derivative problems'' of the original one. We have already implemented the automatic generation function for these derivative problems. Two experimental evaluations of the function and the generated problems were carried out, and this paper describes the results of the experiments.
Support Vector Machines, when combined with kernels, achieve state-of-the-art accuracy on many datasets. However, their use in many real-world applications is hindered by the fact that their model size is often too large and their prediction function too expensive to evaluate. In this paper, to address these issues, we are interested in the problem of learning non-linear classifiers with a sparsity constraint. We first define an L1-regularized convex objective and show how to optimize it without constraints. Next, we show how our approach can be naturally extended to incorporate a constraint via constrained model selection. Experiments show that, compared to SVMs, our approach leads to much more parsimonious models with comparable or better accuracy.
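The sparsity mechanism behind an L1-regularized convex objective can be illustrated with proximal gradient descent (ISTA) and its soft-thresholding step. A linear least-squares objective is used below as a minimal stand-in for the paper's classification objective, and the step size, data, and regularization weight are all illustrative; what carries over is how the L1 term drives irrelevant weights exactly to zero.

```python
import math

def soft_threshold(w, t):
    """Proximal operator of t * ||w||_1: shrinks each weight toward 0."""
    return [math.copysign(max(abs(x) - t, 0.0), x) for x in w]

def ista(X, y, lam, step=0.01, iters=2000):
    """Proximal gradient (ISTA) for
    0.5 * ||Xw - y||^2 + lam * ||w||_1  (stand-in objective)."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        # Gradient of the smooth part: X^T (Xw - y)
        resid = [sum(X[i][j] * w[j] for j in range(d)) - y[i]
                 for i in range(n)]
        grad = [sum(X[i][j] * resid[i] for i in range(n))
                for j in range(d)]
        w = soft_threshold([w[j] - step * grad[j] for j in range(d)],
                           step * lam)
    return w

# The target depends only on the first feature (y = 2 * x1);
# the second feature is irrelevant noise.
X = [[1.0, 0.3], [2.0, -0.4], [3.0, 0.1], [4.0, 0.2]]
y = [2.0, 4.0, 6.0, 8.0]
w = ista(X, y, lam=0.5)
print(w)  # the irrelevant second weight is shrunk to zero
```

The soft-thresholding step is what yields a parsimonious model: any weight whose gradient signal stays below the regularization level is snapped to exactly zero, so the prediction function touches only the surviving features.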