Advanced search results
Showing results for the following criteria:
Full text: "Jedit"
Showing results 1-20 of 23
  • Dotri Quoc, Kazuo Kobori, Norihiro Yoshida, Yoshiki Higo, Katsuro Inoue
    Information and Media Technologies
    2012, Vol. 7, No. 4, pp. 1401-1407
    Published: 2012
    Released: 2012/12/15
    Journal, free access
    In object-oriented programs, access modifiers are used to control the accessibility of fields and methods from other objects. Choosing appropriate access modifiers is one of the key factors for easily maintainable programming. In this paper, we propose a novel analysis method named Accessibility Excessiveness (AE) for each field and method in a Java program, which captures the discrepancy between the access modifier declaration and its real usage. We have developed an AE analyzer, ModiChecker, which analyzes each field or method of the input Java programs and reports its excessiveness. We have applied ModiChecker to various Java programs, including several OSS projects, and have found that this tool is very useful for detecting fields and methods with excessive access modifiers.
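The AE idea described in the abstract can be sketched as follows. This is a hypothetical simplified model, not the actual ModiChecker implementation: given a member's declared modifier and the set of classes that actually access it, compute the narrowest sufficient modifier and report how much wider the declaration is.

```python
# Simplified sketch of Accessibility Excessiveness (AE): compare a Java
# member's declared access modifier with the minimal modifier its actual
# usage would require. The data layout and ranking are illustrative.
LEVELS = ["private", "package", "protected", "public"]  # narrow -> wide

def minimal_modifier(owner, accessors, same_package, subclasses):
    """Return the narrowest modifier that still permits every access."""
    needed = "private"
    for cls in accessors:
        if cls == owner:
            continue                     # own class: private suffices
        if cls in same_package:
            req = "package"              # package-private covers this
        elif cls in subclasses:
            req = "protected"            # subclass outside the package
        else:
            req = "public"
        if LEVELS.index(req) > LEVELS.index(needed):
            needed = req
    return needed

def excessiveness(declared, owner, accessors, same_package, subclasses=()):
    """Positive value = declared modifier is wider than actually needed."""
    needed = minimal_modifier(owner, accessors, same_package, subclasses)
    return LEVELS.index(declared) - LEVELS.index(needed)

# A 'public' field accessed only by its own class is maximally excessive.
print(excessiveness("public", "Account", {"Account"}, {"Account", "Bank"}))  # 3
```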
  • 梅垣 宏行
    日本老年医学会雑誌
    2013, Vol. 50, No. 1, pp. 65-67
    Published: 2013
    Released: 2013/08/06
    Journal, free access
  • Woosung JUNG, Eunjoo LEE, Chisu WU
    IEICE Transactions on Information and Systems
    2011, Vol. E94.D, No. 8, pp. 1575-1589
    Published: 2011/08/01
    Released: 2011/08/01
    Journal, free access
    Change history in project revisions provides helpful information for handling bugs. Existing studies on predicting bugs mainly focus on the resulting bug patterns, not these change patterns. When a code hunk is copied onto several files, the set of original and copied hunks often needs to be maintained consistently. We assume that it is a normal state when all of the hunks survive or die in a specific revision. When a partial change occurs on some duplicated hunks, they are regarded as suspicious hunks. Based on these assumptions, suspicious cases can be predicted and the project's developers can be alerted. In this paper, we propose a practical approach to detecting various change smells based on revision history and code hunk tracking. The change smells are suspicious change patterns that can result in potential bugs, such as partial death of hunks, a missed refactoring or fix, or a backward or late change. To detect these change smells, three kinds of hunks - add, delete, and modify - are tracked and analyzed by an automated tool. Several visualized graphs for each type are suggested to improve the applicability of the proposed technique. We also conducted experiments on large-scale open projects. The case study results show the applicability of the proposed approach.
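The "partial death" smell described above can be sketched as a toy model (this is an illustrative assumption, not the authors' tool): duplicated hunks are tracked per revision, and a revision in which some copies of a hunk group die while sibling copies survive is flagged as suspicious.

```python
# Toy sketch of "partial death" change-smell detection: for each group of
# duplicated hunks, a revision is suspicious if some copies die there
# while sibling copies survive. The data layout is an assumption.
def suspicious_revisions(hunk_groups):
    """hunk_groups: {group_id: {hunk_id: set of revisions where it died}}"""
    smells = []
    for group, hunks in hunk_groups.items():
        all_deaths = set().union(*hunks.values())
        for rev in sorted(all_deaths):
            died = [h for h, deaths in hunks.items() if rev in deaths]
            # Normal state: all copies die together; partial death is a smell.
            if 0 < len(died) < len(hunks):
                smells.append((group, rev, sorted(died)))
    return smells

# Hunk A and its copy B: only A was fixed in r5 -> B may have missed the fix.
groups = {"g1": {"A": {5}, "B": set()}, "g2": {"C": {7}, "D": {7}}}
print(suspicious_revisions(groups))  # [('g1', 5, ['A'])]
```

Group "g2" is not reported because both copies died in the same revision, which the model treats as a normal, consistent change.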
  • Takashi WATANABE, Akito MONDEN, Zeynep YÜCEL, Yasutaka KAMEI, Shuji MORISAKI
    IEICE Transactions on Information and Systems
    2018, Vol. E101.D, No. 9, pp. 2269-2278
    Published: 2018/09/01
    Released: 2018/09/01
    Journal, free access

    Association rule mining discovers relationships among variables in a data set and represents them as rules. These rules are often expected to have predictive ability, that is, to be able to predict future events, but commonly used rule interestingness measures, such as support and confidence, do not directly assess their predictive power. This paper proposes a cross-validation-based metric that quantifies the predictive power of such rules for characterizing software defects. The results of evaluating this metric experimentally on four open-source data sets (Mylyn, NetBeans, Apache Ant and jEdit) show that it can improve rule prioritization performance over conventional metrics (support, confidence and odds ratio) by 72.8% for Mylyn, 15.0% for NetBeans, 10.5% for Apache Ant and 0% for jEdit in terms of the SumNormPre(100) precision criterion. This suggests that the proposed metric can provide better rule prioritization performance than conventional metrics and can at least provide similar performance even in the worst case.
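For reference, the conventional interestingness measures the abstract compares against can be computed as follows. This is a generic sketch with invented data, not the paper's proposed cross-validation-based metric.

```python
# Generic support/confidence computation for an association rule X -> Y
# over a transactional dataset. The example data is invented.
def support(transactions, itemset):
    """Fraction of transactions containing every item in itemset."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(transactions, lhs, rhs):
    """P(rhs | lhs): support of the combined itemset over support of lhs."""
    return support(transactions, set(lhs) | set(rhs)) / support(transactions, lhs)

# Each transaction = properties observed in one module (invented example).
data = [{"high_cc", "defect"}, {"high_cc", "defect"},
        {"high_cc"}, {"low_cc"}]
print(support(data, {"high_cc", "defect"}))       # 0.5
print(confidence(data, {"high_cc"}, {"defect"}))  # 0.666...
```

As the abstract notes, both measures are computed on the whole data set, so neither directly estimates how the rule will perform on unseen data.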

  • Mohsin SHAIKH, Ki-Seong LEE, Chan-Gun LEE
    IEICE Transactions on Information and Systems
    2017, Vol. E100.D, No. 1, pp. 107-117
    Published: 2017/01/01
    Released: 2017/01/01
    Journal, free access

    Packages are re-usable components that support faster and more effective software maintenance. To promote re-use in object-oriented systems and make maintenance tasks easier, packages should be organized into a compact design. Therefore, understanding and assessing package organization is essential for maintenance concerns such as re-usability and changeability. We believe that additional investigation of prevalent basic design principles, such as those defined by R.C. Martin, is required to explore different aspects of package organization. In this study, we propose a package-organization framework based on reachable components that measures a re-usability index. The package re-usability index measures the common effect of changes taking place over the dependent elements of a package in an object-oriented design paradigm. A detailed quality assessment of different versions of open-source software systems is presented, which evaluates the capability of the proposed package re-usability index and other traditional package-level metrics to predict fault-proneness in software. The experimental study shows that the proposed index captures different aspects of package design that can be practically integrated with best practices of software development. Furthermore, the results provide insights into the organization of feasible software designs that counter potential faults arising from complex package dependencies.

  • 川尻 憲行, 小野 敦央, 足立 伊佐雄, 堀越 勇
    日本病院薬学会年会講演要旨集
    1996, Vol. 6
    Published: 1996/08/21
    Released: 2019/03/15
    Proceedings/abstracts, free access
  • Qiao YU, Shujuan JIANG, Yanmei ZHANG
    IEICE Transactions on Information and Systems
    2017, Vol. E100.D, No. 2, pp. 265-272
    Published: 2017/02/01
    Released: 2017/02/01
    Journal, free access

    Class imbalance has drawn much attention from researchers in software defect prediction. In practice, the performance of defect prediction models may be affected by the class imbalance problem. In this paper, we present an approach to evaluating the performance stability of defect prediction models on imbalanced datasets. First, random sampling is applied to convert the original imbalanced dataset into a set of new datasets with different levels of imbalance ratio. Second, typical prediction models are selected to make predictions on these newly constructed datasets, and the Coefficient of Variation (C.V) is used to evaluate the performance stability of the different models. Finally, an empirical study is designed to evaluate the performance stability of six prediction models that are widely used in software defect prediction. The results show that the performance of C4.5 is unstable on imbalanced datasets, and that Naive Bayes and Random Forest are more stable than the other models.
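The stability criterion used above, the coefficient of variation, is simply the standard deviation divided by the mean. A minimal sketch with invented performance scores (the numbers are assumptions for illustration, not the paper's results):

```python
import statistics

# Coefficient of Variation (C.V) of a model's performance scores measured
# on datasets resampled to different imbalance ratios (scores invented).
def coefficient_of_variation(scores):
    """C.V = standard deviation / mean; lower means more stable."""
    return statistics.pstdev(scores) / statistics.mean(scores)

stable_model = [0.80, 0.79, 0.81, 0.80]    # small spread across ratios
unstable_model = [0.80, 0.55, 0.90, 0.40]  # large spread across ratios
print(coefficient_of_variation(stable_model) <
      coefficient_of_variation(unstable_model))  # True
```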

  • 泉 富士夫
    まてりあ
    2017, Vol. 56, No. 6, pp. 393-396
    Published: 2017
    Released: 2017/06/01
    Journal, free access
  • Yu KASHIMA, Takashi ISHIO, Shogo ETSUDA, Katsuro INOUE
    IEICE Transactions on Information and Systems
    2015, Vol. E98.D, No. 6, pp. 1194-1205
    Published: 2015/06/01
    Released: 2015/06/01
    Journal, free access
    To understand the behavior of a program, developers often need to read source code fragments in various modules. System-dependence-graph-based (SDG) program slicing is a good candidate for supporting the investigation of data-flow paths among modules, as an SDG is capable of showing the data dependences of focused program elements. However, this technique has two problems. First, constructing an SDG requires heavyweight analysis, so SDG-based slicing is not suitable for daily use. Second, the results of SDG-based program slicing are difficult to visualize, as they contain many vertices. In this research, we propose variable data-flow graphs (VDFG) for use in program slicing techniques. In contrast to an SDG, a VDFG is created by lightweight analysis because several approximations are used. Furthermore, we propose using the fractal value to visualize VDFG-based program slices in order to reduce graph complexity for visualization purposes. We performed three experiments that demonstrate the accuracy of VDFG program slicing with the fractal value, the size of a visualized program slice, and the effectiveness of our tool for source code reading.
  • Kunihiro NODA, Takashi KOBAYASHI, Noritoshi ATSUMI
    IEICE Transactions on Information and Systems
    2018, Vol. E101.D, No. 7, pp. 1751-1765
    Published: 2018/07/01
    Released: 2018/07/01
    Journal, free access

    Behaviors of an object-oriented system can be visualized as reverse-engineered sequence diagrams from execution traces. This approach is a valuable tool for program comprehension tasks. However, owing to the massive amount of information contained in an execution trace, a reverse-engineered sequence diagram is often afflicted by a scalability issue. To address this issue, many trace summarization techniques have been proposed. Most of the previous techniques focused on reducing the vertical size of the diagram. To cope with the scalability issue, decreasing the horizontal size of the diagram is also very important; nonetheless, few studies have addressed this point, and there is much need for further development of horizontal summarization techniques. In this paper we present a method for identifying core objects for trace summarization by analyzing reference relations and dynamic properties. By visualizing only the interactions related to core objects, we can obtain a horizontally compacted reverse-engineered sequence diagram that contains the system's key behaviors. To identify core objects, we first detect and eliminate temporary objects that are trivial for the system by analyzing the reference relations and lifetimes of objects. Then, estimating the importance of each non-trivial object based on its dynamic properties, we identify the highly important ones (i.e., core objects). We implemented our technique in a tool and evaluated it using traces from various open-source software systems. The results showed that our technique was much more effective in terms of the horizontal reduction of a reverse-engineered sequence diagram, compared with the state-of-the-art trace summarization technique: the horizontal compression ratio of our technique was 134.6 on average, whereas that of the state-of-the-art technique was 11.5. The runtime overhead imposed by our technique was 167.6% on average. This overhead is relatively small compared with recent scalable dynamic analysis techniques, which shows the practicality of our technique. Overall, our technique achieves a significant reduction of the horizontal size of a reverse-engineered sequence diagram with a small overhead and is expected to be a valuable tool for program comprehension.

  • Shinji Yabuta, Hiroko Kawakami, Akira Narita
    ORNITHOLOGICAL SCIENCE
    2010, Vol. 9, No. 2, pp. 109-114
    Published: 2010/12/25
    Released: 2010/12/25
    Journal, free access
    Displays of animals do not always elicit equivalent responses from other conspecifics. Considerable variation exists among the responses, but the mechanism causing the variety remains unclear. We investigated how Black-tailed Gulls Larus crassirostris respond to long-call displays of other individuals. Results show that the response varies depending on the context. Territorial birds often respond to the long-calls of non-territorial birds by attacking. However, non-territorial birds typically respond to the long-calls of territorial birds by avoiding them. Nevertheless, they usually respond to the long-calls of their partners by using the same long-call. These results are explained well by the motivational conflict hypothesis.
  • 田中 昭夫
    医学図書館
    1998, Vol. 45, No. 4, pp. 487-488
    Published: 1998/12/20
    Released: 2011/09/21
    Journal, free access
  • 塚本 聡
    英文学研究
    2013, Vol. 90, pp. 155-160
    Published: 2013/12/01
    Released: 2017/04/10
    Journal, free access
  • : Exploring the effect of understanding "frequent concepts"
    神村 伸一, 香野 俊一
    工学・工業教育研究講演会講演論文集
    1997, Vol. 1997, (62)
    Published: 1997
    Released: 2017/12/07
    Proceedings/abstracts, free access
  • Panita MEANANEATRA, Songsakdi RONGVIRIYAPANISH, Taweesup APIWATTANAPONG
    IEICE Transactions on Information and Systems
    2018, Vol. E101.D, No. 7, pp. 1766-1779
    Published: 2018/07/01
    Released: 2018/07/01
    Journal, free access

    An important step in improving software analyzability is applying refactorings during the maintenance phase to remove bad smells, especially the long method bad smell, which occurs most frequently and is a root cause of other bad smells. However, no research has proposed an approach that repeats refactoring identification, suggestion, and application until all long method bad smells have been removed completely without reducing software analyzability. This paper proposes an effective approach to identifying refactoring opportunities and suggesting an effective refactoring set for complete removal of the long method bad smell without reducing code analyzability. This approach, called the long method remover (LMR), uses refactoring enabling conditions based on program analysis and code metrics to identify four refactoring techniques, and uses a technique embedded in JDeodorant to identify extract method. For effective refactoring set suggestion, LMR uses two criteria: the code analyzability level and the number of statements impacted by the refactorings. LMR also uses side effect analysis to ensure behavior preservation. To evaluate LMR, we applied it to the core package of a real-world Java application. Our evaluation criteria are 1) the preservation of code functionality, 2) the removal rate of long method characteristics, and 3) the improvement in analyzability. The results showed that the methods to which the suggested refactoring sets were applied completely remove the long method bad smell, preserve behavior, and do not decrease analyzability. We conclude that LMR meets its objectives in almost all classes. We also discuss the issues we found during evaluation as lessons learned.
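The identification step can be sketched in a highly simplified form (the metric and threshold below are illustrative assumptions; LMR's actual enabling conditions combine program analysis with several code metrics): flag methods whose statement count exceeds a threshold as long-method candidates.

```python
# Toy long-method detector: flag methods whose statement count exceeds a
# threshold. Both the single metric and the threshold value are
# illustrative assumptions, not LMR's actual enabling conditions.
def long_methods(methods, max_statements=30):
    """methods: {name: statement_count}; returns sorted candidate names."""
    return sorted(name for name, n in methods.items() if n > max_statements)

metrics = {"parse": 12, "render": 85, "save": 31, "close": 3}
print(long_methods(metrics))  # ['render', 'save']
```

A repeating pipeline in the spirit of the paper would re-run such a detector after each applied refactoring until the candidate list is empty.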

  • 川尻 憲行, 小野 敦央, 足立 伊佐雄
    病院薬学
    1997, Vol. 23, No. 5, pp. 445-453
    Published: 1997/10/10
    Released: 2011/08/11
    Journal, free access
    Updating drug information was simplified and the issuance period shortened for our hospital formulary, which was developed using a commercially available package-insert drug information database. Recently, our local area network (LAN) system was improved, so we developed a new World Wide Web (WWW) edition of the hospital formulary based on the same drug information database so that it can be accessed by every client terminal in the university. The drug information database was extracted from a CD-ROM in syllabary order and in order of pharmacological category to create index files. The text and index files were converted to Hyper Text Markup Language (HTML) using the text processing software Jgawk. Original drug information created by the hospital pharmacy as DI NEWS, including drug interactions and adverse reactions, was also converted to HTML and linked to the drugs in the text.
    Because this formulary is written in HTML, any kind of client terminal can access the new WWW edition of our hospital formulary on line. Previously, at least 2-3 months were needed to publish a new hospital formulary; now it can be completed within one day by extracting information from the CD-ROM and creating a new edition on the WWW server.
  • Toshio Hayashi, Seinosuke Kawashima, Hideki Itoh, Nobuhiro Yamada, Hirohito Sone, Hiroshi Watanabe, Yoshiyuki Hattori, Takashi Ohrui, Masashi Yoshizumi, Koutaro Yokote, Kiyoshi Kubota, Hideki Nomura, Hiroyuki Umegaki, Akihisa Iguchi, on behalf of Japan CDM group
    Circulation Journal
    2008, Vol. 72, No. 2, pp. 218-225
    Published: 2008
    Released: 2008/01/25
    Journal, free access
    Background The respective incidences of ischemic heart disease and cerebrovascular disease (IHD, CVD) are high in diabetic individuals. Complications of dyslipidemia increase the risk, but direct evidence is limited, so a prospective cohort study (Japan-CDM) was conducted. Methods and Results The study group comprised 4,014 subjects with type 2 diabetes (1,936 women, 2,078 men; mean age 67.4±9.5 years) who were divided into dyslipidemic patients (79.1%) with or without medication (medicated, 50.9%; not medicated, 28.2%) and normo-lipidemic patients (20.9%). The incidence of IHD, CVD, arteriosclerosis obliterans (ASO), congestive heart failure (CHF) and death was assessed. IHD and CVD occurred in 0.82% and 0.67%, respectively, during the first year following registration. CHF, ASO and sudden death occurred in 0.27%, 0.12% and 0.12%, respectively. Elevated levels of high-density lipoprotein-cholesterol were significantly related to lower rates of IHD and CVD. IHD and CVD in males were dependent on the level of low-density lipoprotein-cholesterol (LDL-C): rates of 0.45%, 1.56%, 1.78%, 1.91% and 2.34% were observed at LDL-C levels of less than 2.11, 2.11-2.62, 2.63-3.15, 3.16-3.67, and more than 3.68 mmol/L, respectively. In the lowest LDL-C group, death other than from vascular diseases was increased. Age, male sex and complicating hypertension increased the risk of events. Patients who were prescribed antihyperlipidemic agents suffered fewer events than patients who were not being treated, which suggests direct effects of therapy. Conclusion Strict lipid control may be effective for reducing the incidence of vascular events in Japanese diabetic individuals. (Circ J 2008; 72: 218 - 225)
  • Rizky Januar AKBAR, Takayuki OMORI, Katsuhisa MARUYAMA
    IEICE Transactions on Information and Systems
    2014, Vol. E97.D, No. 5, pp. 1069-1083
    Published: 2014/05/01
    Released: 2014/05/01
    Journal, free access
    Developers often face difficulties while using APIs. API usage patterns, which are extracted from source code stored in software repositories, can aid them in using APIs efficiently. Previous approaches have mined repositories to extract API usage patterns by simply applying data mining techniques to the collection of method invocations on API objects. In these approaches, the respective functional roles of the invoked methods within API objects are ignored. A functional role represents what type of purpose each method actually achieves, and a method has a specific predefined order of invocation in accordance with its role. Therefore, the simple application of conventional mining techniques fails to produce API usage patterns that are helpful for code completion. This paper proposes an improved approach that extracts API usage patterns at a higher abstraction level rather than directly mining the actual method invocations. It embraces a multilevel sequential mining technique and uses categorization of method invocations based on their functional roles. We have implemented a mining tool and extended Eclipse's code completion facility with the extracted API usage patterns. Evaluation results show that our approach improves existing code completion.
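The role-abstraction idea can be sketched as a toy model (the role table, call sequences, and simple exact-sequence counting are illustrative assumptions; the paper's approach uses multilevel sequential pattern mining): map concrete method invocations to functional roles, then count frequent role sequences.

```python
from collections import Counter

# Toy sketch of role-based API usage mining: abstract concrete method
# calls to functional roles, then count frequent role sequences. The
# role table and the recorded call sequences are invented.
ROLES = {"new": "create", "open": "create", "write": "use",
         "read": "use", "close": "dispose"}

def frequent_role_sequences(call_sequences, min_support=2):
    """Count identical role sequences and keep those meeting min_support."""
    counts = Counter(tuple(ROLES[c] for c in seq) for seq in call_sequences)
    return {seq: n for seq, n in counts.items() if n >= min_support}

usages = [["open", "write", "close"], ["open", "read", "close"],
          ["new", "write"]]
print(frequent_role_sequences(usages))
# {('create', 'use', 'dispose'): 2}
```

At the role level, `open`/`write`/`close` and `open`/`read`/`close` collapse into the same create-use-dispose pattern, which is exactly the kind of generalization that plain invocation-level mining misses.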
  • 上村 隆一
    コンピュータ&エデュケーション
    1997, Vol. 3, pp. 25-29
    Published: 1997/11/30
    Released: 2015/02/03
    Journal, free access
  • 後路 啓子
    情報管理
    1999, Vol. 42, No. 3, pp. 222-229
    Published: 1999
    Released: 2001/04/01
    Journal, free access
    『情報処理』 (IPSJ Magazine) is the journal that the Information Processing Society of Japan publishes monthly for its members. With the January 1998 issue its format changed from B5 to A4, and from the April issue onward, under a consistent editorial policy introduced together with an editor-in-chief system, its content was renewed with the aim of shedding the difficult image typical of academic journals and becoming readable and understandable for non-specialist readers as well. At the same time, the secretariat's editing work moved toward complete digitization, and the DTP (Desk Top Publishing) system, previously only partially introduced, began to be used in earnest. This article concretely describes the actual editing work after the introduction of DTP and its effects.