Research Output
Improving data quality in data warehousing applications
  There is a growing awareness that high-quality data is key to today's business success, and dirty data existing within data sources is one of the causes of poor data quality. To ensure high quality, enterprises need processes, methodologies and resources to monitor and analyse the quality of their data, as well as methodologies for preventing, detecting and repairing dirty data. In practice, however, detecting and cleaning all the dirty data in all data sources is expensive and unrealistic, and the cost of cleaning dirty data must be weighed by most enterprises. A question therefore arises when an organisation intends to clean its data warehouse: how should it select the most important data to clean, based on its business requirements? In this paper, business rules are used to classify dirty data types based on data quality dimensions. The proposed method helps to solve this problem by allowing users to select the appropriate group of dirty data types according to the priority of their business requirements. It also provides guidelines for measuring data quality with respect to different data quality dimensions and will be helpful for the development of data cleaning tools.
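
  As a rough illustration of the approach described in the abstract (not the paper's actual taxonomy, rules, or dimensions), the sketch below groups hypothetical dirty data types by the data quality dimension they affect and selects the groups to clean first according to a priority ordering derived from business requirements. All type names, dimensions and the selection logic here are invented for illustration.

  ```python
  # Illustrative sketch only: the dirty data types, dimensions and priorities
  # below are invented examples, not the classification proposed in the paper.
  from collections import defaultdict

  # Hypothetical mapping of dirty data types to the data quality dimension
  # each one violates (e.g. completeness, accuracy, consistency).
  DIRTY_DATA_TYPES = {
      "missing value":           "completeness",
      "misspelled value":        "accuracy",
      "outdated value":          "accuracy",
      "duplicate record":        "consistency",
      "violated integrity rule": "consistency",
  }

  def group_by_dimension(dirty_types):
      """Group dirty data types by the quality dimension they affect."""
      groups = defaultdict(list)
      for dirty_type, dimension in dirty_types.items():
          groups[dimension].append(dirty_type)
      return groups

  def select_types_to_clean(groups, dimension_priority, max_groups):
      """Pick dirty data type groups in priority order, cleaning at most
      max_groups groups (a stand-in for the organisation's cleaning budget)."""
      selected = []
      for dimension in dimension_priority[:max_groups]:
          selected.extend(groups.get(dimension, []))
      return selected

  if __name__ == "__main__":
      groups = group_by_dimension(DIRTY_DATA_TYPES)
      # Example business requirement: accuracy matters most, then completeness.
      priority = ["accuracy", "completeness", "consistency"]
      print(select_types_to_clean(groups, priority, max_groups=2))
  ```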

  • Date:

    31 December 2010

  • Publication Status:

    Published

  • Publisher:

    SciTePress

  • DOI:

    10.5220/0002903903790382

  • Library of Congress:

    QA75 Electronic computers. Computer science

  • Dewey Decimal Classification:

    005 Computer programming, programs & data

Citation

Li, L., Peng, T., & Kennedy, J. (2010). Improving data quality in data warehousing applications. In J. Filipe & J. Cordeiro (Eds.), Proceedings of the 12th International Conference on Enterprise Information Systems (pp. 379-382). SciTePress. https://doi.org/10.5220/0002903903790382

Keywords

Data quality; dirty data; data cleaning tools; data warehousing
