How does it work?
Data is extracted from the application's data source (step 1). The Data Quality Monitor then evaluates this data automatically (step 4) against the business rules derived from the data definitions (step 2). Exceptions to those business rules are detected, collected, and presented to the data owner (step 3), who can then take corrective action, either by correcting the data or by fine-tuning the data definitions.
By repeating this cycle, both the quality of the data and the optimization process behind it improve continuously (step 5).
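To make the cycle concrete, the sketch below shows in Python how a rule-based check of this kind could look. It is a minimal illustration only: the Rule structure, the field names, and the example records are assumptions for this sketch, not the Data Quality Monitor's actual interface.

```python
from dataclasses import dataclass
from typing import Callable

# A business rule derived from a data definition: a name plus a predicate
# that every record must satisfy (step 2).
@dataclass
class Rule:
    name: str
    check: Callable[[dict], bool]

def monitor(records: list[dict], rules: list[Rule]) -> list[dict]:
    """Evaluate extracted records against the rules (step 4) and collect
    every violation as an exception for the data owner (step 3)."""
    exceptions = []
    for record in records:          # data extracted from the source (step 1)
        for rule in rules:
            if not rule.check(record):
                exceptions.append({"record": record, "rule": rule.name})
    return exceptions

# Illustrative rules and records for article master data (hypothetical fields).
rules = [
    Rule("article number present", lambda r: bool(r.get("article_no"))),
    Rule("weight is positive", lambda r: r.get("weight_kg", 0) > 0),
]
records = [
    {"article_no": "ZN-1001", "weight_kg": 12.5},
    {"article_no": "", "weight_kg": -1},
]

for exc in monitor(records, rules):
    # The data owner reviews each exception and either corrects the record
    # or refines the data definition behind the rule, then the cycle repeats (step 5).
    print(exc["rule"], "->", exc["record"])
```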
A real-world example: NedZink
NedZink is a good example of a company that has committed to optimizing the quality of their data.
A number of their larger DIY customers had asked NedZink to exchange article data through EDI. It did not take long for NedZink to realize that the quality of their master data was simply not good enough for this kind of operation.
Their data was adequate for internal use, where a few extra handling steps made it workable, but an external party would not accept that.
NedZink decided to go all the way and make data quality optimization a standard step in their process. The recurrent cleansing, in combination with the Data Quality Monitor, has dramatically increased the quality of their data.
The level of quality achieved, and the speed at which the project was completed, meant that NedZink was ready for electronic data exchange long before their own customers were.