Information and Media Technologies
Online ISSN : 1881-0896
ISSN-L : 1881-0896
Volume 7, Issue 3
Displaying 1-43 of 43 articles from this issue
Computing
  • Mahito Sugiyama, Kentaro Imajo, Keisuke Otaki, Akihiro Yamamoto
    2012 Volume 7 Issue 3 Pages 928-937
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    To date, an enormous number of studies have been devoted to investigating the biochemical functions of receptors, which play crucial roles in signal processing in organisms. Ligands are key tools in such experiments, since receptor specificity with respect to ligands enables us to control receptor activity. However, finding ligands is difficult; choosing ligand candidates relies on the expert knowledge of biologists, and conducting test experiments in vivo or in vitro is costly. Here we investigate the ligand finding problem with a machine learning approach by formalizing it as a multi-label classification problem, mainly discussed in the area of preference learning. In this paper we develop a new algorithm, LIFT (Ligand FInding via Formal ConcepT Analysis), for multi-label classification, which can treat ligand data in databases in a semi-supervised manner. The key to LIFT is to achieve clustering by putting the original dataset on lattices using the data analysis technique of Formal Concept Analysis (FCA), and then to obtain the preference for each label using the lattice structure. Experiments using real data on ligands and receptors in the IUPHAR database show that LIFT solves the task effectively compared with other machine learning algorithms. (A toy sketch of the FCA step follows this entry.)
    Download PDF (319K)
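A toy illustration of the Formal Concept Analysis step that LIFT builds on: given a binary context (which objects have which attributes), the formal concepts are the maximal object/attribute rectangles, and together they form the lattice used for clustering. The context below and the brute-force enumeration are purely illustrative assumptions; LIFT's semi-supervised label-preference computation is described in the paper itself.

```python
from itertools import combinations

# Hypothetical binary context: which ligands (objects) bind which receptors (attributes).
context = {
    "ligand1": {"receptorA", "receptorB"},
    "ligand2": {"receptorB", "receptorC"},
    "ligand3": {"receptorA", "receptorB", "receptorC"},
}
attributes = set().union(*context.values())

def extent(attrs):
    """Objects possessing every attribute in attrs."""
    return {o for o, a in context.items() if attrs <= a}

def intent(objs):
    """Attributes shared by every object in objs."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

# A formal concept is a pair (A, B) with A = extent(B) and B = intent(A).
# Brute force over attribute subsets is fine for toy contexts only.
concepts = set()
for r in range(len(attributes) + 1):
    for attrs in combinations(sorted(attributes), r):
        A = extent(set(attrs))
        concepts.add((frozenset(A), frozenset(intent(A))))

for A, B in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(A), "<->", sorted(B))
```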
  • Yuki Hasegawa, Yoshinao Isobe, Kazuhito Ohmaki, Hideki Mori, Kensei Ts ...
    2012 Volume 7 Issue 3 Pages 938-948
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    Communicating Sequential Processes (CSP) based architecture is regarded as a useful method in the development of concurrent embedded systems. Many of the products around us contain embedded computer systems, and concurrent processing by software is necessary in multi-core and multi-processor environments to make more effective use of hardware resources. There is also strong demand for hierarchy, resource constraints, and safety in the implementation of embedded systems. As an experiment, we designed, implemented, and verified a concurrent sorting model with a CSP-based architecture. We chose sorting as the subject for parallelization on embedded systems because sorting has been widely studied and is therefore well suited to parallelization. We also evaluated the system and consider the usefulness of CSP through the development examples presented in this paper. (A CSP-style sketch follows this entry.)
    Download PDF (1698K)
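CSP-style concurrency can be emulated in plain Python with threads that communicate only over blocking queues standing in for channels. The pipeline sort below, in which each cell process keeps the smallest value it sees and forwards the rest downstream, is a minimal sketch of a concurrent sorting model in this spirit; it is not the authors' design, and the process decomposition is an assumption.

```python
import threading
import queue

END = object()  # end-of-stream token passed along the channels

def cell(index, inp, out, results):
    """One CSP-style process: keep the smallest value seen so far,
    forward everything else downstream, record the kept value at END."""
    kept = None
    while True:
        v = inp.get()
        if v is END:
            break
        if kept is None or v < kept:
            kept, v = v, kept        # keep the new minimum, forward the old one
        if v is not None:
            out.put(v)
    results[index] = kept            # cell i holds the (i+1)-th smallest value
    out.put(END)

def pipeline_sort(values):
    n = len(values)
    channels = [queue.Queue() for _ in range(n + 1)]
    results = [None] * n
    threads = [threading.Thread(target=cell, args=(i, channels[i], channels[i + 1], results))
               for i in range(n)]
    for t in threads:
        t.start()
    for v in values:
        channels[0].put(v)
    channels[0].put(END)
    for t in threads:
        t.join()
    return results

print(pipeline_sort([5, 3, 8, 1, 9, 2]))  # -> [1, 2, 3, 5, 8, 9]
```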
  • Jasmine A. Malinao, Richelle Ann B. Juayong, Rona May U. Tadlas, Jhoir ...
    2012 Volume 7 Issue 3 Pages 949-955
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    In this paper, a new set of m-dimensional Power Spectrum-based data signatures is derived to obtain better Vector Fusion 2-dimensional visualizations of a time-series, periodic n-dimensional traffic data set, compared with visualizations produced from the entire set of n-dimensional Power Spectrum representations in the literature, where m ≪ n. We were able to ascertain that 4-dimensional data signatures provide empirically optimal representations with respect to the data set used, achieving a ≈ 97.6% reduction in the data representation of the original nD data set. We propose an algorithm that quantitatively determines how well the selected set of m-dimensional signatures represents the n-dimensional data set in 2 dimensions, and we use the Vector Fusion visualization algorithm to transform each signature from m dimensions into 2 dimensions. An improved set of qualitative criteria is drawn up to measure the goodness of the 2-dimensional signature-based visual representation of the original n-dimensional data set. Finally, we provide empirical testing, discuss the results, and summarize the contributions of the proposed methods. (A power-spectrum signature sketch follows this entry.)
    Download PDF (1447K)
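The core reduction step, representing each n-dimensional periodic series by a few dominant power-spectrum components, can be sketched as follows. The choice of m = 4 mirrors the empirically optimal dimension reported in the abstract, but the signature construction, the normalization, and the sample data are assumptions rather than the authors' exact procedure.

```python
import numpy as np

def power_spectrum_signature(series, m=4):
    """Reduce one n-dimensional (periodic) series to an m-dimensional
    signature: the m strongest non-DC power-spectrum magnitudes."""
    spectrum = np.abs(np.fft.rfft(np.asarray(series, dtype=float))) ** 2
    spectrum = spectrum[1:]                 # drop the DC component
    top = np.sort(spectrum)[::-1][:m]       # m dominant components
    total = top.sum()
    return top / total if total else top    # normalize so signatures are comparable

# Example: a noisy periodic traffic-like series (hypothetical data).
rng = np.random.default_rng(0)
t = np.arange(96)                           # e.g., 96 samples of one day
series = 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)
print(power_spectrum_signature(series))
```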
  • Yasunori Shiono, Tomokazu Arita, Youzou Miyadera, Kimio Sugita, Takeo ...
    2012 Volume 7 Issue 3 Pages 956-966
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    Various forms of tables have been used as tools for visualizing and arranging information in many fields. In addition, XML is widely used as a language for exchanging data. We have studied how documents are formally processed with software development tools. In this paper, we propose a system for creating and managing tabular specifications based on an attribute graph grammar. A tabular form specification is represented by a marked graph, and its syntax is defined by an attribute NCE graph grammar. We add a new attribute that contains the XML source code of the tabular form specifications. The XML source code is generated by evaluating the attribute and is automatically registered in the database, from which the specifications can then be retrieved. Our system can perform a characteristic retrieval for software specifications. The results may lead to a considerable improvement in the efficiency of human labor owing to the use of a unified formal methodology based on graph theory and advanced retrieval.
    Download PDF (1195K)
  • Satoshi Takahashi, Tokuro Matsuo
    2012 Volume 7 Issue 3 Pages 967-972
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    This paper proposes a new B2B electronic commerce model based on bidding information in double auctions. In B2B electronic commerce, buyers try to purchase multiple items at the same time, since a buyer develops products by using the purchased items. Suppliers also have an incentive to form coalitions, since buyers want to purchase multiple items in the model. A mechanism designer has to consider an optimal mechanism that calculates an optimal matching between buyers and suppliers. Finding an optimal matching is very hard, since the mechanism must consider all combinations of buyers and suppliers. Consequently, we propose a two-step calculation method: first, the mechanism determines the winners on the buyers' side; second, it determines the coalitions and winners on the suppliers' side using the result of the buyers' side. This paper also discusses an improved method with dynamic mechanism design that uses the bidding information. The advantages are that each developer can procure the components needed to develop a certain item and that tasks are allocated to suppliers effectively. Previous auction data can be used to shorten the period of winner determination. The contribution of this paper has two parts: a mathematical model of the procurement auction that can be applied to practical situations, and a dynamic mechanism for the procurement auction.
    Download PDF (216K)
  • Hideki Tsuiki, Yohei Yokota
    2012 Volume 7 Issue 3 Pages 973-977
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    We consider three-dimensional extensions of the Sudoku puzzle over prefractal objects. The prefractal objects we use are 2nd-level cubic approximations of two 3D fractals. Both objects are composed of 81 cubic pieces and they have 9 × 9-grid appearances in three orthogonal directions. On each object, our problem is to assign a digit to each of the 81 pieces so that it has a Sudoku solution pattern in each of the three 9 × 9-grid appearances. In this paper, we present an algorithm for enumerating such assignments and show the results.
    Download PDF (389K)
  • Shane Dye, Nicola Ward Petty
    2012 Volume 7 Issue 3 Pages 978-985
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    Rogo® is a new type of mathematical puzzle, invented in 2009. Rogo is a prize-collecting subset-selection TSP on a grid. Grid squares can be blank, forbidden, or show a reward value. The objective is to accumulate the biggest score using a given number of steps in a loop around the grid. This paper introduces Rogo as a discrete optimisation problem. An IP formulation is given for the problem with two alternative sets of subtour elimination constraints. Enumeration-based algorithms are also proposed based on properties of solutions and Rogo instances. Some results of computational experiments are reported. (A brute-force sketch of the puzzle follows this entry.)
    Download PDF (938K)
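Rogo can be stated compactly: on a grid with rewards and forbidden squares, find a closed loop of exactly k orthogonal steps that maximizes the total reward collected. The exhaustive depth-first search below is a minimal sketch for tiny instances only; the grid layout and step budget are hypothetical, and the paper's IP formulation and enumeration algorithms are far more refined.

```python
# Tiny hypothetical Rogo instance: None = forbidden, 0 = blank, positive = reward.
GRID = [
    [0, 3, 0, 0],
    [2, None, 0, 4],
    [0, 0, 1, 0],
    [0, 5, 0, 0],
]
ROWS, COLS = len(GRID), len(GRID[0])
STEPS = 8  # required loop length (number of moves), also hypothetical

def best_loop():
    """Exhaustive DFS over simple loops of exactly STEPS moves."""
    best = [0]

    def dfs(start, cell, used, steps_left, score):
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < ROWS and 0 <= nc < COLS) or GRID[nr][nc] is None:
                continue
            if (nr, nc) == start and steps_left == 1:
                best[0] = max(best[0], score)   # loop closed in exactly STEPS moves
            elif (nr, nc) not in used and steps_left > 1:
                dfs(start, (nr, nc), used | {(nr, nc)},
                    steps_left - 1, score + GRID[nr][nc])

    for r in range(ROWS):
        for c in range(COLS):
            if GRID[r][c] is not None:
                dfs((r, c), (r, c), {(r, c)}, STEPS, GRID[r][c])
    return best[0]

print("best reward:", best_loop())  # 10 for this instance
```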
  • Sascha Kurz
    2012 Volume 7 Issue 3 Pages 986-991
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    We consider so-called “squaring the square” puzzles where a given square (or rectangle) should be dissected into smaller squares. For a specific instance of such problems we demonstrate that a mathematically rigorous solution can be quite involved. As an alternative to exhaustive enumeration using tailored algorithms, we describe the general approach of formulating the problem as an integer linear program (a generic formulation of this kind is sketched after this entry).
    Download PDF (218K)
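A generic way to cast such dissection puzzles as an integer linear program is an exact-cover formulation: enumerate every axis-aligned placement of a candidate square and require each unit cell of the target rectangle to be covered exactly once. This is a sketch of a standard formulation consistent with the abstract, not necessarily the specific model used in the paper.

```latex
% x_p = 1 iff placement p (a candidate square at a fixed position) is used;
% C(c) denotes the set of placements that cover unit cell c.
\begin{align*}
  &\sum_{p \in C(c)} x_p = 1 && \text{for every unit cell } c \text{ of the target rectangle},\\
  &\sum_{p\,:\,\mathrm{size}(p) = s} x_p \le u_s && \text{for every admissible side length } s,\\
  &x_p \in \{0, 1\} && \text{for every placement } p.
\end{align*}
```

Here u_s bounds how many squares of side s may be used (for example, u_s = 1 if all squares must have distinct sizes); any feasible point of the program corresponds to a dissection.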
  • Yoshifumi Manabe, Tatsuaki Okamoto
    2012 Volume 7 Issue 3 Pages 992-999
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    This paper discusses cake-cutting protocols when the cake is a heterogeneous good, represented by an interval on the real line. We propose a new desirable property, the meta-envy-freeness of cake-cutting, which has not been formally considered before. Meta-envy-freeness means that there is no envy about role assignments, that is, no party wants to exchange his/her role in the protocol with that of any other party. If there is envy about role assignments, the protocol cannot actually be executed because there is no settlement on which party plays which role in the protocol. A similar definition, envy-freeness, is widely discussed. Envy-freeness means that no player wants to exchange his/her part of the cake with that of any other player. Though envy-freeness was considered to be one of the most important desirable properties, it does not prevent envy about role assignment in the protocols. We define meta-envy-freeness to formalize this kind of envy and propose that simultaneously achieving meta-envy-freeness and envy-freeness is desirable in cake-cutting. We show that current envy-free cake-cutting protocols do not satisfy meta-envy-freeness. Formerly proposed properties such as strong envy-freeness, exactness, and equitability do not directly consider this type of envy, and these properties are very difficult to realize. This paper then presents cake-cutting protocols for the two- and three-party cases that simultaneously achieve envy-freeness and meta-envy-freeness. Lastly, we show meta-envy-free pie-cutting protocols.
    Download PDF (207K)
  • Jonas Kölker
    2012 Volume 7 Issue 3 Pages 1000-1012
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    In a Kurodoko puzzle, one must colour some squares in a grid black in a way that satisfies non-overlapping, non-adjacency, reachability and numeric constraints specified by the numeric clues in the grid. We show that deciding the solvability of Kurodoko puzzles is NP-complete.
    Download PDF (238K)
  • Jonas Kölker
    2012 Volume 7 Issue 3 Pages 1013-1014
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    In a Magnets puzzle, one must pack magnets in a box subject to polarity and numeric constraints. We show that deciding solvability of Magnets instances is NP-complete.
    Download PDF (109K)
  • Jonas Kölker
    2012 Volume 7 Issue 3 Pages 1015-1018
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    In a Slither Link puzzle, the player must draw a cycle in a planar graph such that, for each face carrying a clue, the number of cycle edges incident to that face equals the given clue value. We show that for a number of commonly played graph classes, the Slither Link puzzle is NP-complete.
    Download PDF (229K)
  • Tetsuo Asano, Erik D. Demaine, Martin L. Demaine, Ryuhei Uehara
    2012 Volume 7 Issue 3 Pages 1019-1024
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    Kaboozle is a puzzle consisting of several square cards, each annotated with colored paths and dots drawn on both sides and with holes drilled through. The goal is to join two colored dots with paths of the same color (and fill all holes) by stacking the cards suitably. The freedoms here are to reflect, rotate, and order the cards arbitrarily, so it is not surprising that the problem is NP-complete (as we show). More surprising is that any one of these freedoms — reflection, rotation, and order — is alone enough to make the puzzle NP-complete. Furthermore, we show NP-completeness of a particularly constrained form of Kaboozle related to 1D paper folding. Specifically, we suppose that the cards are glued together into a strip, where each glued edge has a specified folding direction (mountain or valley). This variation removes the ability to rotate and reflect cards, and restricts the order to be a valid folded state of a given 1D mountain-valley pattern.
    Download PDF (396K)
  • Kevin Buchin, Maike Buchin
    2012 Volume 7 Issue 3 Pages 1025-1028
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    In a rolling block maze, one or more blocks lie on a rectangular board with square cells. In most mazes, the blocks have size k × m × n where k, m, n are integers that determine the size of the block in terms of units of the size of the board cells. The task of a rolling block maze is to roll a particular block from a starting to an ending placement. A block is rolled by tipping it over one of its edges. Some of the squares of the board are marked as forbidden to roll on. We show that solving rolling block mazes is PSPACE-complete.
    Download PDF (137K)
  • Kenichiro Nakai, Yasuhiko Takenaga
    2012 Volume 7 Issue 3 Pages 1029-1032
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    Pandemic is a multi-player board game which simulates the outbreak of epidemics and the human effort to prevent them. A characteristic of this game is that all the players cooperate toward a common goal rather than compete with each other. We show that the problem of deciding whether the players can win generalized Pandemic from a given game situation is NP-complete.
    Download PDF (266K)
  • Kazuya Haraguchi, Yasutaka Abe, Akira Maruoka
    2012 Volume 7 Issue 3 Pages 1033-1043
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    We propose a framework that yields instances of certain combinatorial puzzles. To explore the framework, we focus on puzzles that ask for an assignment of numbers to the cells of an n × n grid satisfying certain constraints as well as the Latin square condition, that is, each row and column contains all of the numbers in {1, 2,...,n}. Our algorithm based on the framework automatically yields puzzle instances whose difficulty can be adjusted by means of the puzzle inference rules built into the algorithm. Taking the BlockSum puzzle as an example, we performed experiments demonstrating that, as expected, human solvers tend to correctly solve puzzle instances produced with easy inference rules, whereas they tend to fail on those produced with sophisticated rules. (A Latin square sketch follows this entry.)
    Download PDF (372K)
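The Latin square condition underlying these puzzles is easy to state in code. The sketch below builds a random n × n Latin square by shuffling a cyclic construction and then checks the condition; the inference-rule machinery that controls instance difficulty is specific to the paper and is not reproduced here.

```python
import random

def random_latin_square(n, seed=None):
    """Build an n x n Latin square: start from the cyclic square
    L[i][j] = (i + j) mod n, then permute rows, columns, and symbols.
    (The permutations preserve the Latin property but do not sample uniformly.)"""
    rng = random.Random(seed)
    rows, cols, syms = list(range(n)), list(range(n)), list(range(n))
    rng.shuffle(rows); rng.shuffle(cols); rng.shuffle(syms)
    return [[syms[(rows[i] + cols[j]) % n] + 1 for j in range(n)] for i in range(n)]

def is_latin(square):
    """Check that every row and every column contains 1..n exactly once."""
    n = len(square)
    want = set(range(1, n + 1))
    return all(set(row) == want for row in square) and \
           all({square[i][j] for i in range(n)} == want for j in range(n))

L = random_latin_square(5, seed=1)
for row in L:
    print(row)
print("Latin square:", is_latin(L))
```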
  • Takuro Kutsuna, Shuichi Sato, Naoya Chujo
    2012 Volume 7 Issue 3 Pages 1044-1050
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    The increasing complexity of embedded systems in information and communication technology causes a problem in locating faults during system failures. One reason for this problem is that system components that receive abnormal input data from other components may also output abnormal data, even if they are not themselves in abnormal states, and consequently many redundant faults are detected in the system. In this paper, we present a diagnosis method for automatically locating the origin of faults in systems where fault propagation may occur. We use a model-based diagnosis scheme and an abstract behavior modeling technique to deal with complex software components, and we propose a new approach to diagnosing systems that have data flow loops. Finally, we propose a one-stage approach for solving the abstract model-based diagnosis problem based on its formulation as a partial maximum satisfiability (MaxSAT) problem.
    Download PDF (221K)
  • Amang Sudarsono, Toru Nakanishi, Nobuo Funabiki
    2012 Volume 7 Issue 3 Pages 1051-1061
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    To enhance user privacy, anonymous credential systems allow a user to convince a verifier of the possession of a certificate issued by the issuing authority anonymously. A typical application is the privacy-enhancing electronic ID (eID). Although a previously proposed system achieves constant complexity in the number of finite-set attributes of the user, it requires the use of RSA. In this paper, we propose a pairing-based anonymous credential system, excluding RSA, that achieves constant complexity. The key idea of our proposal is the adoption of a pairing-based accumulator that outputs a constant-size value from a large set of input values. Using zero-knowledge proofs of pairing-based certificates and accumulators, any AND and OR relation can be proved with constant complexity in the number of finite-set attributes. We implement the proposed system using a fast pairing library, compare its efficiency with that of conventional systems, and show its practicality in a mobile eID application.
    Download PDF (370K)
  • Masaharu Fukase, Kazunori Yamaguchi
    2012 Volume 7 Issue 3 Pages 1062-1072
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    The problem of finding a lattice vector approximating a shortest nonzero lattice vector (approximate SVP) is a hard problem concerning lattices. Recovering a lattice vector corresponding to the secret key of some lattice-based cryptosystems is equivalent to solving a hard approximate SVP instance. We call such vectors very short vectors (VSVs). Lattice basis reduction is the main tool for finding VSVs. However, the main lattice basis reduction algorithms cannot find VSVs in lattices of dimension ∼200 or above. Exhaustive search can be considered a key technique for eliminating the limitations of current lattice basis reduction algorithms, but known exhaustive search methods only work in relatively low-dimensional lattices. We defined the extended search space (ESS) and experimentally confirmed that exhaustive searches in ESS make it possible to find VSVs in lattices of dimension ∼200 or above with parameters computed from known VSVs. This paper presents an extension of our earlier work. We demonstrate the practical effectiveness of our technique by presenting a method for choosing the parameters without known VSVs, and we also demonstrate the effectiveness of distributed searches.
    Download PDF (318K)
  • Sho Tsugawa, Hiroyuki Ohsaki, Makoto Imase
    2012 Volume 7 Issue 3 Pages 1073-1082
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    In this paper, we propose a method that prioritizes email using a trust network among users to support email triage, and we evaluate its effectiveness with extensive experiments. In recent years, the amount of email received by individuals has increased, and the time required for email triage (i.e., the process of going through unhandled email messages and deciding what to do with them) has therefore been increasing. Golbeck et al. proposed TrustMail, a prototype email client that prioritizes email in a user's mailbox using a trust network (i.e., a social network representing trust relationships among users). In this paper, we extend the TrustMail concept to allow message-based email prioritization using inter-recipient trust, i.e., trust scores inferred from the recipient to the other recipients. We propose a method called EMIRT (Estimating Message Importance from inter-Recipient Trust) to enable message-based prioritization. Through extensive experiments on two email datasets, we quantitatively evaluate the effectiveness of EMIRT for email prioritization. Our experimental results show that EMIRT is effective: it achieves significantly higher recall and precision than TrustMail on both email datasets and rarely gives low scores to email that was urgently replied to (i.e., EMIRT achieves a very low false negative rate). (A toy inter-recipient trust sketch follows this entry.)
    Download PDF (1046K)
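The idea of scoring a message by the trust inferred from a recipient to the message's other recipients can be sketched as follows. The trust values, the one-step transitive propagation rule, and the aggregation by averaging are all simplifying assumptions for illustration; EMIRT's actual inference and scoring are defined in the paper.

```python
# Direct trust scores between users in [0, 1] (hypothetical values).
direct_trust = {
    ("me", "alice"): 0.9, ("me", "bob"): 0.4,
    ("alice", "carol"): 0.8, ("bob", "dave"): 0.6,
}

def inferred_trust(src, dst):
    """Trust from src to dst: direct if known, otherwise the best
    one-step transitive estimate t(src, k) * t(k, dst)."""
    if src == dst:
        return 1.0
    if (src, dst) in direct_trust:
        return direct_trust[(src, dst)]
    best = 0.0
    for (a, k), s in direct_trust.items():
        if a == src and (k, dst) in direct_trust:
            best = max(best, s * direct_trust[(k, dst)])
    return best

def message_priority(recipient, all_recipients):
    """Score a message for `recipient` by averaging the trust inferred
    from the recipient to each co-recipient of the same message."""
    others = [r for r in all_recipients if r != recipient]
    if not others:
        return 0.0
    return sum(inferred_trust(recipient, r) for r in others) / len(others)

# A message addressed to me together with carol and dave:
print(message_priority("me", ["me", "carol", "dave"]))  # -> 0.48 with these values
```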
Media (processing) and Interaction
  • Minh Hai Nguyen, Kiyoaki Shirai
    2012 Volume 7 Issue 3 Pages 1083-1108
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    Vietnamese is said to be a language with highly ambiguous words. However, there has been no published Word Sense Disambiguation (WSD) research on this language, and this research is the first attempt to study Vietnamese WSD. In particular, we explore effective features for training WSD classifiers and verify the applicability of the ‘pseudoword’ technique both to investigating the effectiveness of features and to training WSD classifiers. Three tasks were conducted, using two corpora: one built manually based on the Vietnamese Treebank and one built automatically by applying the pseudoword technique. Experimental results showed that the bag-of-words feature performs well for all three categories of words (verbs, nouns, and adjectives); however, its combination with POS, collocation, or syntactic features cannot significantly improve the performance of WSD classifiers. Moreover, the experimental results confirmed that pseudowords are a suitable technique for exploring the effectiveness of features in the disambiguation of Vietnamese verbs and adjectives. Furthermore, we empirically evaluated the applicability of the pseudoword technique as an unsupervised learning method for real Vietnamese WSD.
    Download PDF (424K)
  • Radim Tylecek, Radim Šára
    2012 Volume 7 Issue 3 Pages 1109-1116
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    We present a method for the recognition of structured images and demonstrate it on the detection of windows in facade images. Given the ability to obtain local low-level data evidence on the primitive elements of a structure (like windows in a facade image), we determine their most probable number, attribute values (location, size), and neighborhood relation. The embedded structure is weakly modeled by pair-wise attribute constraints, which allow structure and attributes to mutually support each other. We use the very general framework of reversible jump MCMC, which allows simple implementation of a specific structure model and the plug-in of almost arbitrary element classifiers. We have chosen the domain of window recognition in facade images to demonstrate that the result is an efficient algorithm achieving the performance of other, more strongly informed methods for regular structures.
    Download PDF (4213K)
  • Myo Thida, How-Lung Eng, Dorothy N. Monekosso, Paolo Remagnino
    2012 Volume 7 Issue 3 Pages 1117-1123
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    In this paper, we propose a new approach to recognizing group events and detecting abnormality in a crowded scene. A manifold learning algorithm with temporal constraints is proposed to embed a video of a crowded scene in a low-dimensional space. Our low-dimensional representation preserves the spatio-temporal properties of a video as well as its characteristics. Recognizing video events and detecting abnormality in a crowded scene are achieved by studying the video trajectory in the manifold space. We evaluate our proposed method on state-of-the-art public datasets containing different crowd events. Qualitative and quantitative results show the promising performance of the proposed method.
    Download PDF (1297K)
  • Hajime Morita, Tetsuya Sakai, Manabu Okumura
    2012 Volume 7 Issue 3 Pages 1124-1129
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    We propose a new method for query-oriented extractive multi-document summarization. To enrich the information need representation of a given query, we build a co-occurrence graph to obtain words that augment the original query terms. We then formulate the summarization problem as a Maximum Coverage Problem with Knapsack Constraints based on word pairs rather than single words. Our experiments with the NTCIR ACLIA question answering test collections show that our method achieves a pyramid F3-score of up to 0.313, a 36% improvement over a baseline using Maximal Marginal Relevance. (A greedy coverage sketch follows this entry.)
    Download PDF (233K)
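The Maximum Coverage Problem with Knapsack Constraints is NP-hard, but a standard greedy heuristic, repeatedly picking the sentence with the best ratio of newly covered weighted word pairs to length, gives the flavor of the formulation. The pair weights, the sentences, and the budget below are hypothetical; the query expansion via the co-occurrence graph and the exact optimization used in the paper are not reproduced.

```python
from itertools import combinations

def word_pairs(sentence):
    """Unordered pairs of distinct (lower-cased) words in a sentence."""
    return set(combinations(sorted(set(sentence.lower().split())), 2))

def greedy_summary(sentences, pair_weight, budget):
    """Greedy heuristic for maximum coverage under a knapsack (length) budget:
    repeatedly add the sentence maximizing newly covered pair weight per word."""
    covered, summary, used = set(), [], 0
    remaining = list(sentences)
    while remaining:
        def gain(s):
            return sum(pair_weight.get(p, 0.0) for p in word_pairs(s) - covered)
        best = max(remaining, key=lambda s: gain(s) / max(len(s.split()), 1))
        if gain(best) == 0 or used + len(best.split()) > budget:
            break
        summary.append(best)
        covered |= word_pairs(best)
        used += len(best.split())
        remaining.remove(best)
    return summary

# Hypothetical query-expanded pair weights and candidate sentences.
weights = {("nuclear", "policy"): 2.0, ("energy", "policy"): 1.5, ("energy", "nuclear"): 1.0}
docs = ["Nuclear policy changed last year.",
        "Energy policy debates continue.",
        "The weather was mild."]
print(greedy_summary(docs, weights, budget=12))  # keeps the two topical sentences
```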
  • Jiyi Li, Qiang Ma, Yasuhito Asano, Masatoshi Yoshikawa
    2012 Volume 7 Issue 3 Pages 1130-1135
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    Social image hosting websites, such as Flickr, have grown rapidly in recent years. Content-based image retrieval on such websites is a useful potential service but is still unavailable because its performance is unsatisfactory. We propose a multi-modal relevance feedback (MMRF) scheme and a supervised re-ranking approach based on it to improve performance for practical application. Our multi-modal scheme utilizes both image and social tag relevance feedback instances. The approach propagates visual and textual information as well as multi-modal relevance feedback information on a graph with a mutual reinforcement process. We conduct experiments based on real-world data from Flickr to evaluate the performance of our approach. We also conduct an experiment showing that our multi-modal relevance feedback scheme significantly improves performance compared with a traditional single-modal relevance feedback (SMRF) scheme.
    Download PDF (412K)
  • Ramon Francisco Mejia, Yuichi Kaji, Hiroyuki Seki
    2012 Volume 7 Issue 3 Pages 1136-1144
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    Monochrome two-dimensional barcodes are rapidly becoming a de facto standard for distributing digital data through printed media because of their low cost and portability. Increasing the data density of these barcodes improves the flexibility and effectiveness of existing barcode applications and has the potential to create novel means of data transmission and conservation. However, printing and scanning equipment introduce uncontrollable effects on the image at a very fine level, and these effects are beyond the scope of the error control mechanisms of existing barcode schemes. To realize high-density barcodes, it is essential to develop a novel symbology and error control mechanisms that can manage these kinds of effects and provide practical reliability. To tackle this problem, this paper studies the communication channel defined by high-density barcodes and proposes several error control techniques to increase the robustness of the barcode scheme. Some of these techniques convert the peculiar behavior of printing equipment to the well-studied model of the additive white Gaussian noise (AWGN) channel. The use of low-density parity-check (LDPC) codes is also investigated, as they perform much better than conventional Reed-Solomon codes, especially on AWGN channels. Through experimental evaluation, it is shown that the proposed error control techniques can be essential components in realizing high-density barcodes. (An AWGN channel sketch follows this entry.)
    Download PDF (1184K)
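One idea in the abstract is to model print-and-scan distortion as an AWGN channel so that soft-decision codes such as LDPC become applicable. The sketch below only simulates binary barcode cells over an AWGN channel and measures the raw (uncoded) bit error rate with hard decisions; the barcode symbology and the actual error-control design are the paper's contribution and are not reproduced.

```python
import numpy as np

def uncoded_ber_over_awgn(n_bits=100_000, snr_db=6.0, seed=0):
    """Raw bit error rate of +/-1 (BPSK-like) barcode cells over AWGN,
    using hard decisions at the receiver."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    symbols = 2.0 * bits - 1.0                      # 0 -> -1, 1 -> +1
    noise_std = np.sqrt(1.0 / (2.0 * 10 ** (snr_db / 10.0)))
    received = symbols + rng.normal(0.0, noise_std, n_bits)
    decided = (received > 0).astype(int)            # hard decision
    return np.mean(decided != bits)

for snr in (0.0, 3.0, 6.0, 9.0):
    print(f"SNR {snr:>4.1f} dB -> BER {uncoded_ber_over_awgn(snr_db=snr):.4f}")
```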
  • Futoshi Sugimoto, Makoto Murakami, Chieko Kato
    2012 Volume 7 Issue 3 Pages 1145-1150
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    In this study, the process of synthesizing a human face from word information is defined as a mapping from a word space, composed of words expressing the dimensions and shapes of facial elements, into a physical model space where the physical shapes of the facial elements are formed. By introducing the concept of a mapping, the use of all the words in the word space makes it possible to synthesize a human face from free and uninhibited descriptions. Furthermore, we only have to make the 3-dimensional physical models corresponding to the words that are selected as training data for identifying the mapping function; the others are produced through the mapping. Finally, we examine the validity of the mapping function obtained in this study.
    Download PDF (1109K)
  • Masaharu Hirota, Naoki Fukuta, Shohei Yokoyama, Hiroshi Ishikawa
    2012 Volume 7 Issue 3 Pages 1151-1161
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    Although metadata are useful for obtaining better image clustering results, some images lack social tags or metadata about photo-taking conditions. In this paper, we propose an image clustering method that is robust to such missing metadata in photo images appearing in Web search results. The method has an integrated mechanism for estimating missing social tags or photo-taking conditions from the other images in the image search result. An advantage of our method is that it does not require a separate training set constructed from images outside the search result. We demonstrate that the proposed method can effectively cluster images with missing metadata by showing the performance of on-demand clustering on a photo sharing site.
    Download PDF (2911K)
  • Yuichi Murakami, Shingo Nakamura, Shuji Hashimoto
    2012 Volume 7 Issue 3 Pages 1162-1172
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    In most article retrieval systems using Kansei words, there is a gap between the user's Kansei and the system's Kansei model, so it is not always easy to retrieve the desired articles. The purpose of this paper is to bridge this gap without putting a strain on users by combining a recommendation function and interaction design with four features. First, users can search intuitively because the system visualizes the retrieval space as a torus-type SOM (Self-Organizing Map). Second, users can always find the most desirable article through an elimination method that deletes undesirable articles indicated by the user. Third, neural networks in the system learn the user's Kansei from the most desirable article to improve retrieval accuracy. Fourth, users can search for articles with arbitrary Kansei words and can edit the retrieval criteria as they please. In the evaluation experiments, the authors used actual paintings as the articles and evaluated usability (effectiveness, efficiency, and satisfaction), novelty, and serendipity. The results are attributed to the synergistic effects of the recommendation function and the interaction design.
    Download PDF (1884K)
  • Ippei Torii, Kaoruko Ohtani, Takahito Niwa, Naohiro Ishii
    2012 Volume 7 Issue 3 Pages 1173-1179
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    This paper attempts to revitalize a large-scale shopping district (shotengai) using new internet techniques. The recent decline of shotengai caused by the development of large shopping centers is a serious problem, so we took a new internet-based approach to revitalizing a shotengai, a typical Japanese shopping district. The Osu Shotengai, which includes about 400 stores, is one of the most famous shotengai in Nagoya, Japan. We developed the Osu Shotengai official web site, called “At Osu.” First, the information on the 400 stores in the Osu Shotengai, which spans 9 streets, was collected. Then we created an interactive “Information Visualization System” to put fresh shotengai information on the web site in real time. It includes a “Comment Upload System,” with which store owners can upload their comments and news directly to the web site. Further, we developed a new approach to stimulating store owners' motivation to participate in the web site, and we also describe an attractive and interactive web design that uses Twitter to gather users' opinions. With the new web site, the number of visitors to “At Osu” has increased rapidly. Many articles about this new approach to revitalizing a shotengai with a web site have been published in newspapers and magazines, and we have received many inquiries.
    Download PDF (2555K)
  • Shohei Hido, Shoko Suzuki, Risa Nishiyama, Takashi Imamichi, Rikiya Ta ...
    2012 Volume 7 Issue 3 Pages 1180-1191
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    Current patent systems face a serious problem of declining patent quality, as the growing number of applications makes it difficult for patent officers to spend enough time evaluating each application. To build a better patent system, it is necessary to define a public consensus on the quality of patent applications in a quantitative way. In this article, we tackle the problem of assessing the quality of patent applications using machine learning and text mining techniques. For each patent application, our tool automatically computes a score called patentability, which indicates how likely it is that the application will be approved by the patent office. We employ a new statistical prediction model to estimate examination results (approval or rejection) based on a large data set of 0.3 million patent applications. The model computes the patentability score from a set of feature variables including the text contents of the specification documents. Experimental results showed that our model outperforms a conventional method that uses only the structural properties of the documents. Since users can access the estimated result through a Web-browser-based GUI, this system allows both patent examiners and applicants to quickly detect weak applications and find their specific flaws.
    Download PDF (702K)
  • Haruyuki Iwama, Yasushi Makihara, Yasushi Yagi
    2012 Volume 7 Issue 3 Pages 1192-1204
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    The importance of person identification techniques is increasing for visual surveillance applications. In social living scenarios, people often act in groups composed of friends, family, and co-workers, and this is a useful cue for person identification. This paper describes a method for person identification in video sequences based on this group cue. In the proposed approach, the relationships between the people in an input sequence are modeled using a graphical model. The identity of each person is then propagated to their neighbors in the form of message passing in a graph via belief propagation, depending on each person's group affiliation information and their characteristics, such as spatial distance and velocity vector difference, so that the members of the same group with similar characteristics enhance each other's identities as group members. The proposed method is evaluated through gait-based person identification experiments using both simulated and real input sequences. Experimental results show that the identification performance is considerably improved when compared with that of the straightforward method based on the gait feature alone.
    Download PDF (2712K)
Computer Networks and Broadcasting
  • Tomoo Sumida, Yasunori Shiono, Takaaki Goto, Katsuyoshi Ito, Kensei Ts ...
    2012 Volume 7 Issue 3 Pages 1205-1212
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    We describe a protocol named Non-ordered Block Transfer on Explicit Multi-Unicast (NBT on XCAST). The purpose of this protocol is to save network bandwidth and to reduce server load when delivering large-capacity stored files. In general, files are transferred on a unicast basis. However, unicast wastes bandwidth by sending the same large-capacity file in response to each client's request, and the server load increases. To solve this problem, Non-ordered Block Transfer (NBT) was proposed as an efficient transfer system for large-capacity stored files using asynchronous transfer mode (ATM) multicast. However, NBT over ATM has not been implemented and has a limited area of use. In this paper, we therefore propose an NBT protocol using XCAST (NBT on XCAST), a proposed multicast scheme. XCAST can be implemented in a pure IP network without limitations imposed by the data link layer protocol. Our proposed system is especially effective when the server frequently transfers large-capacity stored files. We discuss bandwidth consumption efficiency and a retransmission scheme, and report experimental results of a partial performance evaluation of NBT on XCAST.
    Download PDF (412K)
  • Naoki Fukuta
    2012 Volume 7 Issue 3 Pages 1213-1219
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    Peer-to-peer (P2P) data sharing is a valuable approach for sharing data among people who belong to different institutions. There are strong demands for both flexible, high-precision search and privacy protection in peer-to-peer data retrieval. In particular, it is desirable to search for relevant files in a P2P environment using metadata, even though the terms used in such queries and annotations may include private information. In this paper, I present a mechanism, and an analysis, of P2P-based semantic file sharing and retrieval that uses mobile agents. The mechanism enables us to utilize private ontologies for flexible concept-oriented semantic searches without loss of privacy when performing semantic matching between the private metadata of files and the requested semantic queries. The private ontologies are formed on a certain reference ontology with differential ontologies for personalization. In my approach, users can manage and annotate their files with their own private ontologies. Reference ontologies are used to find semantically relevant files for the given queries, which include semantic relations among existing files and the requested files. The mobile agent approach is applied both to implement the system with less use of network bandwidth and to code it as a set of simple, small programs. I show the effectiveness of the use of private ontologies in metadata-based file retrieval. I also show that the mobile agent approach has somewhat lower execution-time overhead when the network latency is relatively high, and that the overhead remains small even when the network is ideally fast.
    Download PDF (1589K)
  • Mingmei Li, Naoki Imai, Kiyohito Yoshihara
    2012 Volume 7 Issue 3 Pages 1220-1227
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    Non-line-of-sight (NLOS) signal propagation is the major source of error in wireless Time Difference of Arrival (TDOA) indoor systems. Even if enough landmarks (LMs) can be deployed by service providers or system designers, existing methods suffer from poor location accuracy when few line-of-sight (LOS) measurements can be detected. Under the condition that the total number of TDOA measurements satisfies the minimum triangulation requirement (≥ 3), this paper proposes a new integrated location method. The proposed method integrates the user's step length and step count from the built-in sensors of a mobile phone into the wireless TDOA location system. First, it detects LOS/NLOS measurements using the user's step length and count. Second, when only two LOS measurements are detected, it integrates the previous location and step length as a supplementary LM and measurement to estimate the location. Simulation results show that the proposed method detects LOS/NLOS measurements at a higher ratio, that the location error depends on the number of detected LOS measurements, and that the proposed method achieves lower location errors when two LOS measurements are detected.
    Download PDF (528K)
  • Nobuharu Kami, Teruyuki Baba, Satoshi Ikeda, Takashi Yoshikawa, Hiroyu ...
    2012 Volume 7 Issue 3 Pages 1228-1237
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    We present a fast algorithm for probabilistically extracting significant locations from raw GPS data based on data point density. Extracting significant locations from raw GPS data is the first essential step of algorithms designed for location-aware applications. Most current algorithms compare spatial/temporal variables with given fixed thresholds to extract significant locations. However, the appropriate threshold values are not known a priori, and algorithms with fixed thresholds are inherently error-prone, especially under high noise levels. Moreover, they often do not scale with system size since direct distance computation is required. We developed a fast algorithm for selectively sampling data points around significant locations based on density information by constructing random histograms using locality-sensitive hashing. Theoretical analysis and evaluations show that significant locations are accurately detected with a loose parameter setting even under high noise levels. (A density-bucket sketch follows this entry.)
    Download PDF (752K)
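The density-based idea, counting how many GPS points hash into the same spatial bucket and keeping only points in dense buckets, can be sketched with a simple grid hash. Real locality-sensitive hashing with random histograms, and the probabilistic analysis, are in the paper; the cell size, the density threshold, and the sample trace below are assumed parameters and data.

```python
from collections import Counter

def dense_points(points, cell=0.0005, min_count=20):
    """Keep GPS points that fall into spatial buckets containing at least
    `min_count` points; dense buckets approximate significant locations.
    `cell` is the bucket size in degrees (hypothetical parameter)."""
    def bucket(p):
        lat, lon = p
        return (int(lat // cell), int(lon // cell))
    counts = Counter(bucket(p) for p in points)
    return [p for p in points if counts[bucket(p)] >= min_count]

# Hypothetical trace: many samples near an office, sparse samples in transit.
office = [(35.0001 + i * 1e-6, 135.0002 + i * 1e-6) for i in range(50)]
transit = [(35.01 + i * 1e-3, 135.01 + i * 1e-3) for i in range(30)]
kept = dense_points(office + transit)
print(len(kept), "of", len(office + transit), "points kept")  # 50 of 80
```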
Information Systems and Applications
  • Hark-Jin Lee, Young-Sung Son, Jun-Hee Park, Kyeong-Deok Moon, Jae-Cheo ...
    2012 Volume 7 Issue 3 Pages 1238-1243
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    In this paper, we investigate an integrated architecture to support interoperability among heterogeneous middlewares on home networks. We propose and implement a general middleware bridge to support device interoperability across different middlewares for efficient home networks. The IWFEngine (interworking function engine) architecture provides an interface for identifying and utilizing services among devices using simple rules in order to support interoperability among heterogeneous middlewares. Through the registered rules, local middleware messages are translated into standard messages, and vice versa. Unlike existing integrated middleware architectures, the IWFEngine architecture improves efficiency, and convenient adaptor development is possible through simple rules and the use of local middleware messages. With this configuration, conversion rules for exchanging messages between devices on various middlewares can be described without modifying the corresponding middleware, and operations can be performed in accordance with the existing middleware mechanisms. Finally, the overhead incurred by a centralized, integrated middleware architecture can be reduced by distributing adaptors across multiple devices.
    Download PDF (634K)
  • Haruhisa Hasegawa, Noriaki Kamiyama, Hideaki Yoshino
    2012 Volume 7 Issue 3 Pages 1244-1251
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    While information and communication technologies (ICT) are expected to improve energy efficiency, the spread of ICT will also increase power consumption due to higher server loads and increased network traffic. This paper presents a traffic management scheme to control traffic volume and server load by using virtual machines (VMs) in a cloud architecture. The proposed scheme addresses the inefficiency arising from the difference between the amount of content that is distributed and the amount that is actually played back. As a result, this scheme reduces network traffic volume by suppressing unnecessary content distribution. It also efficiently reduces the total server load by concentrating the load of content being played back onto a smaller number of VMs. We evaluated the load reduction achieved with our scheme and found that the load for distributing content was drastically reduced. Our proposed scheme is expected to contribute to achieving a low-carbon society in the effort to reduce global warming.
    Download PDF (594K)
  • Takuya Nishikawa, Satoshi Fujita
    2012 Volume 7 Issue 3 Pages 1252-1258
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    Peer-to-Peer (P2P) systems have attracted considerable attention in recent years as a key technology for realizing scalable, dependable network services. However, because of their high anonymity, P2P systems have several drawbacks, such as weakness against malicious attacks by anonymous peers. In this paper, we propose a method to evaluate the trustworthiness of each peer by explicitly taking into account the accuracy of mutual evaluations. The proposed method is an extension of EigenTrust, proposed by Kamvar et al., which calculates a global trust vector consistent with the observed local trust vectors as a weighted sum in a linear space. The performance of the proposed method is evaluated by simulation. The simulation results indicate that the proposed method identifies a large subset of reliable peers with a sufficiently small number of message transmissions compared with a simple modification of EigenTrust. (A basic EigenTrust sketch follows this entry.)
    Download PDF (358K)
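The underlying EigenTrust computation that the proposed method extends can be sketched as a power iteration over the normalized local trust matrix. The local trust values below are hypothetical, and the accuracy-weighting extension proposed in the paper is not included.

```python
def eigentrust(local_trust, peers, iterations=50):
    """Basic EigenTrust (Kamvar et al.): normalize each peer's local trust
    scores so that each row sums to one, then repeatedly apply t <- C^T t;
    the iteration converges to the global trust vector."""
    c = {}
    for i in peers:
        row = {j: local_trust.get((i, j), 0.0) for j in peers if j != i}
        total = sum(row.values())
        # Peers reporting no trust at all are treated as trusting everyone equally.
        c[i] = {j: (v / total if total else 1.0 / (len(peers) - 1)) for j, v in row.items()}
    t = {i: 1.0 / len(peers) for i in peers}   # uniform starting vector
    for _ in range(iterations):
        t = {j: sum(c[i].get(j, 0.0) * t[i] for i in peers) for j in peers}
    return t

peers = ["a", "b", "c", "d"]
local = {("a", "b"): 0.8, ("a", "c"): 0.2, ("b", "c"): 1.0,
         ("c", "a"): 0.5, ("c", "b"): 0.5, ("d", "c"): 1.0}
scores = eigentrust(local, peers)
print({p: round(s, 3) for p, s in scores.items()})  # peer d earns no trust
```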
  • Akiya Inoue, Yuuki Takano, Takeshi Kurosawa, Motoi Iwashita, Ken Nishi ...
    2012 Volume 7 Issue 3 Pages 1259-1265
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    This paper presents a mobile-carrier choice modeling framework for analyzing customer preferences and understanding customer choice behavior in the mobile phone market. Due to severe competitive conditions, there are few differences between the mobile phone services provided by mobile carriers. We propose a new mobile-carrier choice model that takes into account incentive factors and restrictive factors as decision-making factors. A Web survey was carried out to obtain the sample data for this model. We present the model estimated from the survey data and use it to analyze mobile-carrier choice behavior. (A generic discrete-choice sketch follows this entry.)
    Download PDF (1371K)
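Carrier-choice models of this kind are commonly estimated as discrete-choice models such as multinomial logit. The sketch below computes choice probabilities from a linear utility over hypothetical incentive and restriction attributes with hypothetical coefficients; it is a generic illustration, not the model specification or the estimates from the paper.

```python
import math

def logit_choice_probabilities(utilities):
    """Multinomial logit: P(choose k) = exp(U_k) / sum_j exp(U_j)."""
    m = max(utilities.values())                      # subtract max for numerical stability
    expu = {k: math.exp(u - m) for k, u in utilities.items()}
    z = sum(expu.values())
    return {k: v / z for k, v in expu.items()}

# Hypothetical linear utility: fee (restrictive), handset discount (incentive),
# and contract lock-in (restrictive), with hypothetical coefficients.
beta = {"fee": -0.05, "incentive": 0.02, "lockin": -0.8}
carriers = {
    "carrier_A": {"fee": 60, "incentive": 100, "lockin": 1},
    "carrier_B": {"fee": 50, "incentive": 30,  "lockin": 0},
    "carrier_C": {"fee": 45, "incentive": 0,   "lockin": 1},
}
U = {name: sum(beta[a] * x for a, x in attrs.items()) for name, attrs in carriers.items()}
print(logit_choice_probabilities(U))
```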
  • Hiroki Nakagawa, Akihiko Nagai, Takayuki Ito
    2012 Volume 7 Issue 3 Pages 1266-1273
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    In this paper, we propose a middle-agent framework to analyze a business model focused on distributors for win-win cooperation and collaboration, by revealing the effects, the influence, and the requirements for consensus in cooperation and collaboration. Distributors can create good cooperation and collaboration by mediating between manufacturing and user companies. We give an example of the collaborative development of new products in which a distributor mediates between maker and user companies. The Application Specific Standard Product (ASSP), an LSI for specific applications, is attracting attention. To develop an ASSP, both the semiconductor and user companies must agree on the functions that the ASSP provides and on how many ASSPs will be needed, without disclosing secret information. In this paper, we model distributors in a collaborative development and implement a tool for an agent-based simulation of a market where a product is developed, sold, and bought. We investigate the role of middle agents (distributors) and how they affect the market. In addition, we propose a framework for examining a new business model.
    Download PDF (613K)
  • Shigeaki Tanimoto, Masahiko Yokoi, Hiroyuki Sato, Atsushi Kanai
    2012 Volume 7 Issue 3 Pages 1274-1282
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    A ubiquitous ICT environment has rapidly developed through cloud computing and has become very convenient to use. However, threats such as computer viruses, unauthorized access, and attacks on servers have emerged, and these threats also occur in the academic environment. Thus, the construction of a public key infrastructure (PKI) for achieving a safe and secure university ICT environment is desired. The University PKI project is underway at the National Institute of Informatics, and common specifications, such as supply specifications for a campus PKI and certificate policy/certification practice statement guidelines, have been proposed. However, PKIs are still rarely deployed; they generally have a high cost structure, which has become one of the issues in their spread and promotion. This study quantitatively clarifies the cost structure of a PKI through estimation and actual measurement. This clarification will contribute to the increased use and advancement of campus PKIs.
    Download PDF (5107K)
  • Daisuke Asai, Jarrod Orszulak, Richard Myrick, Chaiwoo Lee, Lisa D'Amb ...
    2012 Volume 7 Issue 3 Pages 1283-1293
    Published: 2012
    Released on J-STAGE: September 15, 2012
    JOURNAL FREE ACCESS
    Aging in place is a sustainable strategy for aging societies all over the world, although various issues remain to be resolved. One of these issues, the isolation of the elderly, is expected to be tackled by technology. We identify three concepts for designing systems that assist the elderly in communicating with their families: providing triggers for communication, providing control over communication, and enabling effortless communication. We developed the e-Home system based on these three concepts. e-Home is a communication system that includes home monitoring; it offers shared sticky notes and video-telephony as communication media while monitoring medication compliance. We conducted a two-month field study of four households, studying e-Home use and its impact on the subjects' communication habits. The results show enhanced communication in all households.
    Download PDF (1257K)