2020, Volume 37, Issue 2, Pages 2_97-2_103
Much research has been done on coverage-based fuzzing tools, which automatically generate test cases and adapt them to a testing target. However, it is difficult to compare the performance of these tools because experiments in prior work often use different testing targets. In this paper, we performed an empirical study of four coverage-based fuzzing tools. As benchmarks, we used three collections of testing targets, together with their experimental settings, from earlier studies. As a result, we confirmed that the newer tools are more effective, with statistical significance, except in a few cases.
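The abstract refers to fuzzers that generate and adapt test cases to a target automatically; as background, the following is a minimal sketch of the coverage-guided loop that such tools share. The `target` callback, the byte-flipping `mutate`, and the edge-set coverage model are illustrative assumptions for this sketch, not the interface or algorithm of any of the evaluated tools.

```python
import random


def mutate(data: bytes) -> bytes:
    """Flip one random byte of the input (a deliberately minimal mutator)."""
    if not data:
        return bytes([random.randrange(256)])
    buf = bytearray(data)
    i = random.randrange(len(buf))
    buf[i] ^= random.randrange(1, 256)
    return bytes(buf)


def coverage_guided_fuzz(target, seeds, iterations=10_000):
    """Illustrative coverage-guided fuzzing loop (greybox style).

    `target(input) -> set of covered edges` is an assumed instrumentation
    interface; real tools use far more sophisticated mutation, scheduling,
    and coverage feedback than this sketch.
    """
    corpus = list(seeds)
    seen_coverage = set()
    for _ in range(iterations):
        parent = random.choice(corpus)           # pick a seed from the corpus
        candidate = mutate(parent)               # derive a mutated test case
        covered = target(candidate)              # edges exercised by this run
        if not covered <= seen_coverage:         # new coverage was reached
            seen_coverage |= covered
            corpus.append(candidate)             # keep inputs that expand coverage
    return corpus
```

In this kind of loop, only inputs that reach previously unseen coverage are retained, which is what lets the test suite adapt to the target over time.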