Kaiyo Chosa Gijutsu (Ocean Survey Technology)
Online ISSN : 2185-4920
Print ISSN : 0915-2997
ISSN-L : 0915-2997
Article
Errors Likely to Occur in Creating and Managing Oceanic Datasets and Their Causes — II. The Case of the Iwate Fisheries Technology Center and the Treatment of Duplicated Data —
小熊 幸子, 鈴木 亨, 永田 豊, 渡辺 秀俊, 山口 初代, 高杉 知
Author information
Journal: Free access

1999, Volume 11, Issue 2, pp. 2_11-2_18

Abstract

Once oceanic data have been archived by a data management agency such as JODC (Japan Oceanographic Data Center), it is difficult to send questionable data back to their originator for correction even when such data are found. Questionable data are not eliminated from the dataset; instead, an error flag is attached to them. It is nevertheless desirable to minimize the number of questionable data. We investigated error sources that commonly arise in the data processing, collection, and storage stages, in order to find ways to improve the quality of the data flowing into the JODC/MIRC system. In the previous paper (Nagata et al., 1999), we analyzed the dataset obtained by the Wakayama Research Center of Agriculture, Forestry and Fisheries (WRCAFF) and found that errors were generated mainly in the punching process. In this paper, we report the results of an analysis of the database of the Iwate Fisheries Technology Center (IFTC). Data quality at IFTC improved markedly after 1970, just as at WRCAFF. Many duplicated data were found in the IFTC database. The main cause of the duplication is that IFTC maintains two kinds of datasets (Coastal Lines and Offshore Lines), and the data obtained at some stations were sometimes sent to both. Checking for duplicated data is an important part of data management. We discuss techniques for duplication checking, referring to the case of IFTC.
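The paper does not specify its duplication-check algorithm here, but the basic idea of detecting the same cast entered into two datasets (Coastal Lines and Offshore Lines) can be sketched as a key-matching pass: collapse each record to an observation date plus coordinates rounded to a tolerance, and report records of one dataset whose key also appears in the other. The record layout, field names, and rounding tolerance below are illustrative assumptions, not the authors' method.

```python
from datetime import date

# Hypothetical minimal records: (observation date, latitude, longitude, value).
coastal = [
    (date(1975, 6, 1), 39.642, 141.972, 12.3),
    (date(1975, 6, 2), 39.100, 142.300, 11.8),
]
offshore = [
    (date(1975, 6, 1), 39.641, 141.974, 12.3),  # same cast, coordinates re-keyed
    (date(1975, 6, 3), 38.900, 142.800, 10.5),
]

def dup_key(rec, pos_decimals=2):
    """Collapse a record to (date, rounded lat, rounded lon) so that the
    same station entered twice with slightly different coordinates maps
    to the same key. pos_decimals is an assumed tolerance (~1 km)."""
    d, lat, lon, _value = rec
    return (d, round(lat, pos_decimals), round(lon, pos_decimals))

def find_duplicates(dataset_a, dataset_b):
    """Return records of dataset_b whose key also occurs in dataset_a."""
    keys_a = {dup_key(r) for r in dataset_a}
    return [r for r in dataset_b if dup_key(r) in keys_a]

dups = find_duplicates(coastal, offshore)
```

In practice a real check would also compare the observed profiles themselves, since two stations can legitimately share a date and position; rounding-based keys only flag candidates for manual inspection.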

© 1999 The Japan Society for Marine Surveys and Technology