This paper presents an approach that integrates the fuzzy integral with fast heuristic search to improve the quality of unit micromanagement in the popular RTS game StarCraft. Unit micromanagement, i.e., detailed control of units in combat, is one of the most challenging problems posed by RTS games and is often tackled with search algorithms such as Minimax or Alpha-Beta. Due to the vast state and action spaces, the game tree is often very large, and search algorithms must rely on evaluation functions at a limited depth rather than exploring deeper into the tree. We therefore apply the fuzzy integral, aiming for a highly accurate evaluation method for the search. To achieve this aim, we propose a new function that allows the fuzzy integral to cope not only with non-additive properties but also with unit properties in RTS games. Experimental results reported at the end of this paper show that our approach outperforms an existing approach in terms of win rates in this domain.
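The abstract does not spell out the paper's evaluation function, but the discrete Choquet integral is the standard way a fuzzy integral aggregates criteria under a non-additive measure. The sketch below is a minimal illustration of that aggregation; the criteria names (`hp`, `dps`) and the measure values are illustrative assumptions, not the paper's actual function.

```python
def choquet_integral(values, measure):
    """Discrete Choquet integral.

    values: dict mapping each criterion to its score in [0, 1].
    measure: dict mapping frozensets of criteria to a fuzzy measure in [0, 1]
             (non-additive: g(A ∪ B) need not equal g(A) + g(B)).
    """
    # Sort criteria by score, descending.
    items = sorted(values.items(), key=lambda kv: kv[1], reverse=True)
    scores = [s for _, s in items] + [0.0]  # sentinel x_(n+1) = 0
    subset, result = frozenset(), 0.0
    for i, (name, _) in enumerate(items):
        subset = subset | {name}  # top-i criteria
        result += (scores[i] - scores[i + 1]) * measure[subset]
    return result

# Hypothetical unit evaluation: hit points matter slightly more than damage,
# and the pair together is worth more than the sum of the parts would suggest.
g = {frozenset({"hp"}): 0.5,
     frozenset({"dps"}): 0.4,
     frozenset({"hp", "dps"}): 1.0}
score = choquet_integral({"hp": 0.9, "dps": 0.6}, g)
```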
Parallelization of the alpha-beta algorithm on distributed computing environments is a promising way of improving the playing strength of computer game programs. Search programs should predict which subtrees will not be pruned and concentrate their effort on them. Unlike in sequential search, when subtrees are explored in parallel, their results arrive asynchronously. Using such information dynamically should allow better prediction of the subtrees that are never pruned. We have implemented a parallel game tree search algorithm that performs such dynamic updates to the prediction. Two kinds of game trees were used in the performance evaluation: synthetic game trees and game trees generated by a state-of-the-art computer player of shogi (Japanese chess). On a computer cluster with 1,536 cores, dynamic updates show significant performance improvements, which are more apparent in the game trees generated by the shogi program, for which the initial prediction is less accurate. The speedup nevertheless remains sublinear. A performance model built through analyses of the results reasonably explains them.
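For background, sequential alpha-beta pruning — the baseline the paper parallelizes — can be sketched in a few lines. This is only an illustration of why predicting cutoffs matters (whole sibling subtrees are skipped once a cutoff occurs); the paper's parallel algorithm and its dynamic prediction updates are not reproduced here.

```python
def alphabeta(tree, alpha, beta, maximizing):
    """Minimal alpha-beta on a nested-list game tree (ints are leaf values)."""
    if isinstance(tree, int):
        return tree
    if maximizing:
        value = float("-inf")
        for child in tree:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the remaining sibling subtrees are pruned
        return value
    else:
        value = float("inf")
        for child in tree:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff
        return value

# Max root over three min nodes: minimax value is max(3, 6, 1) = 6,
# and the subtree [1, 2] is cut off after its first leaf.
result = alphabeta([[3, 5], [6, 9], [1, 2]], float("-inf"), float("inf"), True)
```

Visiting the strongest subtree first maximizes cutoffs, which is exactly why a parallel search wants an accurate prediction of which subtrees survive pruning.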
Type: Special Issue on Collaboration Technologies and Network Services that Concentrate Wisdom toward Future Society
Subject area: Web Intelligence
2015 Volume 23 Issue 1
Published: 2015; Released: January 15, 2015
In this paper, we propose a method for assessing the quality of Wikipedia articles from their edit history using the h-index. One of the major methods for assessing Wikipedia article quality is peer review: if an editor's text is retained by subsequent editors, the text is regarded as approved by them, and the editor is judged to be a good editor. However, if an editor edits multiple articles and is approved in only a small number of them, the editor's quality value depends heavily on the quality of those few texts. In this paper, we apply the h-index, which is simple yet resistant to excessive values, to the peer-review-based Wikipedia article assessment method. Although the h-index can identify whether an editor is a good editor, it cannot distinguish a vandal from a merely inactive editor. To solve this problem, we propose the p-ratio for identifying which editors are vandals and which are inactive. Our experiments confirmed that, by integrating the h-index with the p-ratio, our method outperforms the existing peer-review-based method in the accuracy of article quality assessment.
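The h-index step itself is well defined and can be sketched directly. The input below — a per-article "approval count" for one editor — is an assumption about how the paper's peer-review scores would be fed in; the p-ratio is specific to the paper and is not reproduced here.

```python
def h_index(approvals):
    """Largest h such that the editor has h articles with at least h approvals.

    approvals: list of per-article approval counts for one editor
    (illustrative input; the paper derives these from edit-history peer review).
    """
    counts = sorted(approvals, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank  # still have `rank` articles with >= `rank` approvals
        else:
            break
    return h

# An editor approved 10, 8, 5, 4, and 3 times across five articles has h = 4:
# a single highly approved article cannot inflate the score on its own.
value = h_index([10, 8, 5, 4, 3])
```

This robustness to a few extreme values is what the abstract means by "resistant to excessive values."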
Online social media such as Facebook, Twitter, and YouTube have been used extensively during disasters and emergencies. Despite the advantages these services offer in letting citizens supply information in uncertain situations, they raise the issue of misinformation spreading on Twitter through retweets. Accordingly, in this study we conducted a user survey (n=133) to investigate what actions users take toward spread messages on Twitter and why users decide to retweet them. Factor analysis extracted three factors in users' actions toward spread messages: 1) the desire to spread retweeted messages considered important, 2) marking retweeted messages as favorites using Twitter's "Favorite" function, and 3) searching for further information about the content of retweeted messages. We then analyzed why users decide to retweet. The results reveal that users want to spread messages they consider important, and that they retweet because of a perceived need to retweet, interesting tweet content, and the tweeting user. The results presented in this paper provide an understanding of user behavior in information diffusion, with the aim of reducing the spread of misinformation on Twitter during emergencies.
As distributed computing becomes part of the daily life of a large number of people, it becomes important to rethink the way we express compatibility between the components of distributed systems. This paper proposes a mechanism to check service compatibility based on service contracts. We propose that a contract be specified in terms of a process calculus and that interacting services have their algorithms verified against such contracts. This way, we can formally check whether they can reach a target state, meaning that they can successfully interact. To guide the compatibility check, we propose a variation of the Java programming language as a Domain-Specific Language (DSL). This DSL, along with a runtime model, was specially designed to allow automated examination of behavior in a message-oriented middleware environment. We provide a qualitative evaluation of our proposal through the analysis of an example involving the dynamic creation of interconnections.
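The core reachability idea — two contracts are compatible if complementary send/receive steps can drive both to a target state — can be sketched as a breadth-first search over the synchronous product of two labeled transition systems. The encoding below (dicts of transitions, `!`/`?` action prefixes, `s0`/`end` state names) is an illustrative assumption, not the paper's process-calculus DSL.

```python
from collections import deque

def compatible(svc_a, svc_b, start=("s0", "s0"), goal=("end", "end")):
    """BFS over the synchronous product of two services.

    Each service is a dict: state -> list of (action, next_state).
    Actions are strings like "!order" (send) or "?order" (receive);
    a joint step is possible only on complementary actions with the same name.
    """
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            return True  # both services reached their target state
        a, b = state
        for act_a, nxt_a in svc_a.get(a, []):
            for act_b, nxt_b in svc_b.get(b, []):
                same_name = act_a[1:] == act_b[1:]
                complementary = {act_a[0], act_b[0]} == {"!", "?"}
                if same_name and complementary:
                    nxt = (nxt_a, nxt_b)
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
    return False

# A buyer that sends an order and awaits an ack, against a matching seller:
buyer = {"s0": [("!order", "s1")], "s1": [("?ack", "end")]}
seller = {"s0": [("?order", "s1")], "s1": [("!ack", "end")]}
ok = compatible(buyer, seller)
```

A mismatched partner (e.g., one expecting `?pay` first) yields no joint step from the start state, so the check fails — the "cannot reach the target state" case.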
In this paper, we present a resource management server that makes it easy to provide production cloud services based on OpenStack. In recent years, cloud computing technologies have progressed and many providers have launched cloud services. Some providers use proprietary systems, but others use open-source IaaS software such as OpenStack and CloudStack. Because the OpenStack development community is very active, we expect OpenStack to become the de facto standard open-source IaaS software. However, because OpenStack aims to provide primitive APIs for IaaS control, several problems arise when it is used as-is for production services: CRUD transactions on logical/virtual resources are insufficient, nova-scheduler, which selects hypervisors for virtual machine deployment, does not consider operators' business requirements, and logical checks against unsuitable API calls are insufficient. Therefore, we propose a resource management server that manages physical and logical/virtual resources to enable production IaaS services based on OpenStack. The resource management server mediates between users and OpenStack, providing added functions such as logical checks of API calls, combined use of multiple APIs, and hypervisor scheduling logic for virtual machines. We implemented the proposed resource management server and showed that operators can run reliable IaaS services without being conscious of OpenStack's problems. Furthermore, we measured the performance of combined multiple-API use and showed that our method reduces users' waiting time for image deployment and image extraction from volumes.
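The mediation idea — intercept a user's request, apply a logical check, and only then forward it to the IaaS layer — can be sketched generically. Everything below is a hypothetical illustration: the class, method names, and the attached-volume rule are assumptions for this sketch, not OpenStack's actual API or the paper's implementation.

```python
class ResourceMediator:
    """Hypothetical sketch of a mediating resource management server:
    it tracks logical state and rejects unsuitable API calls before they
    would reach the underlying IaaS APIs."""

    def __init__(self):
        self.attached = {}  # volume_id -> server_id (logical state)

    def attach_volume(self, server_id, volume_id):
        # Logical check: a volume can be attached to only one server.
        if volume_id in self.attached:
            raise ValueError("volume already attached")
        self.attached[volume_id] = server_id
        # ...would forward the attach call to the IaaS API here.

    def detach_volume(self, volume_id):
        self.attached.pop(volume_id, None)
        # ...would forward the detach call to the IaaS API here.

    def delete_volume(self, volume_id):
        # Logical check: refuse to delete a volume still attached to a VM,
        # instead of letting the primitive API fail mid-transaction.
        if volume_id in self.attached:
            raise ValueError("volume in use")
        # ...would forward the delete call to the IaaS API here.
```

The point of the pattern is that the user sees one coherent, checked interface while the primitive APIs stay untouched underneath.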
In this paper, we propose an accelerated algorithm for solving the shortest vector problem (SVP). We construct our algorithm using two novel ideas, namely the choice of appropriate distributions for the natural number representation and the reduction of the sum of the squared lengths of the Gram-Schmidt orthogonalized vectors. Both ideas rest essentially on statistical analysis. The first technique generates lattice vectors expected to be short from a particular distribution over natural number representations. We choose the distribution so that very short lattice vectors have a greater chance of being generated, while lattice vectors unlikely to be very short are not generated. The second technique reduces the sum of the squared lengths of the Gram-Schmidt orthogonalized vectors by restricting the insertion index of a new lattice vector. We confirmed by theoretical and experimental analysis that the smaller this sum is, the more frequently a short lattice vector tends to be found. Using our algorithm, we solved an SVP instance in a higher dimension than previously reported, namely dimension 132.
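The quantity the second technique minimizes — the sum of squared lengths of the Gram-Schmidt orthogonalized basis vectors — can be computed with plain Gram-Schmidt orthogonalization, sketched below. This shows only the measured quantity; the paper's sampling distribution and insertion-index restriction are not reproduced.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gso_squared_lengths(basis):
    """Squared lengths of the Gram-Schmidt orthogonalized vectors b*_i.

    basis: list of lattice basis vectors (lists of numbers).
    The sum of the returned values is the statistic the second
    technique tries to keep small.
    """
    ortho, squares = [], []
    for b in basis:
        v = list(b)
        for u in ortho:
            mu = dot(b, u) / dot(u, u)  # projection coefficient onto b*_j
            v = [vi - mu * ui for vi, ui in zip(v, u)]
        ortho.append(v)
        squares.append(dot(v, v))
    return squares

# For the basis {(1,0), (1,1)}: b*_1 = (1,0), b*_2 = (0,1), so the
# squared lengths are [1, 1] and their sum is 2.
sq = gso_squared_lengths([[1, 0], [1, 1]])
```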
VoIP/SIP is taking the place of conventional telephony because of its very low call charges, but it is also attractive to SPITters who advertise or spread phishing calls to many callees. Although many feature-based SPIT detection methods exist, none of them provides flexibility across multiple features, so complex threshold settings and training phases cannot be avoided. In this paper, we propose an unsupervised, threshold-free SPITter detection scheme based on a clustering algorithm. Our scheme does not use multiple features directly to trap SPITters; instead, it uses them to compute the dissimilarity between each pair of callers and tries to separate the callers into a SPITter cluster and a legitimate one based on that dissimilarity. Through computer simulation, we show that the combination of Random Forests dissimilarity and PAM clustering yields the best classification accuracy, and that our scheme works well when SPITters account for more than 20% of all callers.
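The clustering step — partitioning callers into two groups from a precomputed pairwise dissimilarity matrix — can be sketched with a brute-force two-medoid search, a simplified stand-in for PAM that is exact for small inputs. The toy one-dimensional "callers" below are an illustrative assumption; the paper computes dissimilarities with Random Forests, which is not reproduced here.

```python
from itertools import combinations

def two_medoid_clustering(dissim):
    """Split n items into 2 clusters given an n x n dissimilarity matrix.

    Exhaustively tries every medoid pair and keeps the one minimizing the
    total dissimilarity of items to their nearest medoid (PAM's objective,
    solved by brute force rather than PAM's iterative swaps).
    """
    n = len(dissim)
    best_cost, best = float("inf"), None
    for m1, m2 in combinations(range(n), 2):
        cost = sum(min(dissim[i][m1], dissim[i][m2]) for i in range(n))
        if cost < best_cost:
            best_cost, best = cost, (m1, m2)
    m1, m2 = best
    # Label each item by its nearest medoid: 0 -> first cluster, 1 -> second.
    return [0 if dissim[i][m1] <= dissim[i][m2] else 1 for i in range(n)]

# Four callers whose behavior scores are 0, 1, 10, 11: the first two and
# the last two form the natural pair of clusters.
scores = [0, 1, 10, 11]
D = [[abs(p - q) for q in scores] for p in scores]
labels = two_medoid_clustering(D)
```

Working purely from a dissimilarity matrix is what makes the scheme threshold-free: no per-feature cutoff ever has to be set.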