Today, customers are embarking on data modernization programs by migrating on-premises data warehouses and data lakes to the AWS Cloud to take advantage of its scale and advanced analytical capabilities. Customers are migrating their on-premises data warehouse solutions built on databases like Netezza, PostgreSQL, Greenplum, and Teradata to AWS-based modern data platforms using services like Amazon Simple Storage Service (Amazon S3) and Amazon Redshift. AWS-based modern data platforms help you break down data silos and enable analytics and machine learning (ML) use cases at scale.
During migration, you might want to establish data parity checks between on-premises databases and AWS data platform services. Data parity is a process to validate that data was migrated successfully from source to target without any errors or failures. A successful data parity check means that data in the target platform has content, values, and completeness equivalent to those of the source platform.
Data parity can help build confidence and trust with business users on the quality of migrated data. Additionally, it can help you identify errors in the new cloud-based extract, transform, and load (ETL) process.
Some customers build custom in-house data parity frameworks to validate data during migration. Others use open source data quality products for data parity use cases. These options involve a lot of custom code, configuration, and installation, and have scalability challenges. This diverts valuable person-hours from the actual migration effort into building and maintaining a data parity framework.
In this post, we show you how to use AWS Glue Data Quality, a feature of AWS Glue, to establish data parity during data modernization and migration programs with minimal configuration and infrastructure setup. AWS Glue Data Quality enables you to automatically measure and monitor the quality of your data in data repositories and AWS Glue ETL pipelines.
Overview of solution
In large data modernization projects of migrating from an on-premises database to an Amazon S3 based data lake, it’s common to have the following requirements for data parity:
- Compare one-time historical data from the source on-premises database to the target S3 data lake.
- Compare ongoing data that is replicated from the source on-premises database to the target S3 data lake.
- Compare the output of the new cloud-based ETL process with the existing on-premises ETL process. You can plan a period of parallel runs, where the legacy and new systems run in parallel, and the data is compared daily.
- Use functional queries to compare high-level aggregated business metrics between the source on-premises database and the target data lake.
In this post, we use an example of PostgreSQL migration from an on-premises database to an S3 data lake using AWS Glue Data Quality.
The following diagram illustrates this use case’s historical data migration architecture.
The architecture shows a common pattern for on-premises databases (like PostgreSQL) to Amazon S3 based data lake migration. The workflow includes the following steps:
- Schemas and tables are stored in an on-premises database (PostgreSQL), and you want to migrate to Amazon S3 for storage and AWS Glue for compute.
- Use AWS Database Migration Service (AWS DMS) to migrate historical data from the source database to an S3 staging bucket.
- Use AWS Glue ETL to curate data from the S3 staging bucket to an S3 curated bucket. In the curated bucket, AWS Glue tables are created using AWS Glue crawlers or an AWS Glue ETL job.
- Use an AWS Glue connection to connect AWS Glue with the on-premises PostgreSQL database.
- Use AWS Glue Data Quality to compare historical data from the source database to the target S3 bucket and write results to a separate S3 bucket.
The following diagram illustrates the incremental data migration architecture.
After historical data is migrated and validated, the workflow proceeds to the following steps:
- Ingest incremental data from the source systems to the S3 staging bucket. This is done using an ETL ingestion tool like AWS Glue.
- Curate incremental data from the S3 staging bucket to the S3 curated bucket using AWS Glue ETL.
- Compare the incremental data using AWS Glue Data Quality.
In the next sections, we demonstrate how to use AWS Glue Data Quality to establish data parity between source (PostgreSQL) and target (Amazon S3). We cover the following scenarios:
- Establish data parity for historical data migration – Historical data migration is defined as a one-time bulk data migration of historical data from legacy on-premises databases to the AWS Cloud. The data parity process maintains the validity of migrated historical data.
- Establish data parity for incremental data – After the historical data migration, incremental data is loaded to Amazon S3 using the new cloud-based ETL process. The incremental data is compared between the legacy on-premises database and the AWS Cloud.
- Establish data parity using functional queries – We perform business- and functional-level checks using SQL queries on migrated data.
Prerequisites
You need to set up the following prerequisite resources:
Establish data parity for historical data migration
For historical data migration and parity, we’re assuming that setting up a PostgreSQL database, migrating data to Amazon S3 using AWS DMS, and curating the data have been completed as prerequisites to performing data parity using AWS Glue Data Quality. For this use case, we use an on-premises PostgreSQL database with historical data loaded to Amazon S3 and AWS Glue. Our objective is to compare historical data between the on-premises database and the AWS Cloud.
We use the following tables in the on-premises PostgreSQL database. These have been migrated to the AWS Cloud using AWS DMS. As part of data curation, the following three additional columns have been added to the test_schema.sample_data table in the curated layer: id, etl_create_ts, and etl_user_id.
- Create the sample_data table (an illustrative DDL sketch for all three tables follows this list).
- Create the gender table, which contains gender details for lookup.
- Create the blood_group table, which contains blood group information for lookup.
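The exact DDL from the source system isn’t reproduced here, so the following is a minimal sketch, assuming column names consistent with the rest of this post: job and username form the primary key, birth_date and company are attributes, and the gender and blood_group lookups are referenced by ID. Adapt it to your actual source schema.

```sql
-- Illustrative sketch only; column names and types are assumptions,
-- not the exact DDL used for this post.
CREATE SCHEMA IF NOT EXISTS test_schema;

CREATE TABLE test_schema.gender (
    gender_id SMALLINT PRIMARY KEY,
    gender    VARCHAR(20) NOT NULL
);

CREATE TABLE test_schema.blood_group (
    blood_group_id SMALLINT PRIMARY KEY,
    blood_group    VARCHAR(5) NOT NULL
);

CREATE TABLE test_schema.sample_data (
    job            VARCHAR(100) NOT NULL,
    username       VARCHAR(50)  NOT NULL,
    birth_date     DATE,
    company        VARCHAR(100),
    gender_id      SMALLINT REFERENCES test_schema.gender (gender_id),
    blood_group_id SMALLINT REFERENCES test_schema.blood_group (blood_group_id),
    PRIMARY KEY (job, username)
);
```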
We’re assuming that the preceding tables have been migrated to the S3 staging bucket using AWS DMS and curated using AWS Glue. For detailed instructions on how to set up AWS DMS to replicate data, refer to the appendix at the end of this post.
In the following sections, we showcase how to configure an AWS Glue Data Quality job for comparison.
Create an AWS Glue connection
AWS Glue Data Quality uses an AWS Glue connection to connect to the source PostgreSQL database. Complete the following steps to create the connection:
- On the AWS Glue console, under Data Catalog in the navigation pane, choose Connections.
- Choose Create connection.
- Set Connector type as JDBC.
- Add connection details like the connection URL, credentials, and networking details.
Refer to AWS Glue connection properties for additional details.
AWS Glue 4.0 uses PostgreSQL JDBC driver 42.3.6. If your PostgreSQL database requires a different JDBC driver version, download the JDBC driver corresponding to your PostgreSQL version.
Create an AWS Glue data parity job for historical data comparison
As part of the preceding steps, you used AWS DMS to pull historical data from PostgreSQL to the S3 staging bucket. You then used an AWS Glue notebook to curate data from the staging bucket to the curated bucket and created AWS Glue tables. As part of this step, you use AWS Glue Data Quality to compare data between PostgreSQL and Amazon S3 to confirm the data is valid. Complete the following steps to create an AWS Glue job using the AWS Glue visual editor to compare data between PostgreSQL and Amazon S3:
- Set the source as the PostgreSQL table sample_data.
- Set the target as the AWS Glue table sample_data.
- In the curated layer, we added a few additional columns: id, etl_create_ts, and etl_user_id. Because these columns are newly created, we use a transformation to drop them for comparison.
- Additionally, the birth_date column is a timestamp in AWS Glue, so we change it to date format prior to comparison.
- Choose Evaluate Data Quality in Transformations.
- Specify the AWS Glue Data Quality rule as DatasetMatch, which checks if the data in the primary dataset matches the data in a reference dataset.
- Provide the unique key (primary key) information for source and target. In this example, the primary key is a combination of columns job and username.
- For Data quality transform output, specify your data to output:
- Original data – This output includes all rows and columns in the original data. In addition, you can select Add new columns to indicate data quality errors. This option adds metadata columns for each row that can be used to identify valid and invalid rows and the rules that failed validation. You can further customize the row-level output to select only valid rows or convert the table format based on the use case.
- Data quality results – This is a summary output grouped by a rule. For our data parity example, this output will have one row with a summary of the match percentage.
- Configure the Amazon S3 targets for ruleOutcomes and rowLevelOutcomes to write the AWS Glue Data Quality output to the Amazon S3 location in Parquet format.
- Save and run the AWS Glue job.
- When the AWS Glue job is complete, you can run an AWS Glue crawler to automatically create the rulesummary and row_level_output tables and view the output in Amazon Athena.
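After the crawler has run, you can preview both outputs in Athena. The following queries are a sketch: the table names come from the crawler, and DataQualityEvaluationResult is assumed to be the default row-level status column, so adjust the names to match your environment.

```sql
-- Preview the rule-level summary written by the ruleOutcomes target
SELECT * FROM rulesummary LIMIT 10;

-- Summarize the row-level output written by the rowLevelOutcomes target
-- (DataQualityEvaluationResult is assumed to be the default status column)
SELECT dataqualityevaluationresult,
       COUNT(*) AS record_count
FROM   row_level_output
GROUP  BY dataqualityevaluationresult;
```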
The following is an example of rule-level output. The screenshot shows the DatasetMatch value as 1.0, which implies all rows between the source PostgreSQL database and target data lake matched.
The following is an example of row-level output. The screenshot shows all source rows along with additional columns that indicate whether a row passed or failed validation.
Let’s update a few records in PostgreSQL to simulate a data issue during the data migration process.
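A few UPDATE statements along the following lines introduce differences between the source table and the migrated copy in Amazon S3. The predicate, the usernames, and the modified column are only illustrative; change any rows you like.

```sql
-- Illustrative only: modify a handful of source rows so they no longer match
-- the curated copy in the S3 data lake. The usernames below are placeholders.
UPDATE test_schema.sample_data
SET    company = company || ' - Updated'
WHERE  username IN ('user_001', 'user_002', 'user_003');
```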
You can rerun the AWS Glue job and observe the output in Athena. In the following screenshot, the new match percentage is 87.27%. With this example, AWS Glue Data Quality successfully captured the simulated data issue.
If you run a query against the row-level output, the result will match the record count shown in the preceding screenshot.
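One plausible form of that query, assuming the default DataQualityEvaluationResult status column in the row-level output, is the following:

```sql
-- Count the rows flagged as failed in the row-level output; this count should
-- line up with the mismatches reported in the rule-level summary
SELECT COUNT(*) AS failed_records
FROM   row_level_output
WHERE  dataqualityevaluationresult = 'Failed';
```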
Establish data parity for incremental data
After the initial historical migration, the next step is to implement a process to validate incremental data between the legacy on-premises database and the AWS Cloud. For incremental data validation, data output from the existing ETL process and the new cloud-based ETL process is compared daily. You can add a filter to the preceding AWS Glue data parity job to select data that has been modified for a given day using a timestamp column.
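For example, if both sides maintain a modification timestamp, the source and target queries can each be restricted to a single day. The following filter is a sketch; last_modified_ts is an assumed column name, so substitute whatever timestamp your source system and curated layer actually carry.

```sql
-- Compare only the rows modified on a given day on each side.
-- last_modified_ts is a placeholder for your actual timestamp column.
SELECT *
FROM   test_schema.sample_data
WHERE  CAST(last_modified_ts AS DATE) = DATE '2024-06-01';
```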
Establish data parity using functional queries
Functional queries are SQL statements that business analysts can run in the legacy system (for this post, an on-premises database) and the new AWS Cloud-based data lake to compare data metrics and output. To make sure the consumer applications work correctly with migrated data, it’s imperative to validate data functionally. The previous examples are primarily technical validations that confirm there is no data loss in the target data lake after ingestion, in both the historical migration and change data capture (CDC) contexts. In a typical data warehouse migration use case, the historical migration pipeline often pulls data from a data warehouse, and the incremental or CDC pipeline integrates the actual source systems that feed the data warehouse.
Functional data parity is the third step in the overall data validation framework, where you have the flexibility to continue similar business metrics validation driven by an aggregated SQL query. You can construct your own business metrics validation query, preferably working with subject matter experts (SMEs) from the business side. We have noticed that agility and precision matter for a successful data warehouse migration, so reusing time-tested, business SME-approved aggregated SQL queries from the legacy data warehouse system with minimal changes can fast-track implementation and maintain business confidence. In this section, we demonstrate how to implement a sample functional parity check for a given dataset.
In this example, we use a set of source PostgreSQL tables and target S3 data lake tables for comparison. We use an AWS Glue crawler to create Data Catalog tables for the source tables, as described in the first example.
The sample functional validation compares the distribution count of gender and blood group for each company. This could be any functional query that joins facts and dimension tables and performs aggregations.
You can use a SQL transform to generate an aggregated dataset for both the source and target queries. In this example, the source query joins multiple tables; apply SQL functions to the columns and add the required filter criteria.
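The following is a sketch of such an aggregated source query, based on the assumed schema from the earlier DDL sketch (the join keys and column names are assumptions). The target functional query would be the equivalent statement against the curated AWS Glue tables.

```sql
-- Distribution of gender and blood group per company on the source side.
-- Table and column names follow the assumed schema sketched earlier in this post.
SELECT sd.company,
       g.gender,
       bg.blood_group,
       COUNT(*) AS record_count
FROM   test_schema.sample_data sd
JOIN   test_schema.gender g        ON sd.gender_id = g.gender_id
JOIN   test_schema.blood_group bg  ON sd.blood_group_id = bg.blood_group_id
GROUP  BY sd.company, g.gender, bg.blood_group;
```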
The following screenshot illustrates the Source Functional Query transform.
The following screenshot illustrates the Target Functional Query transform.
The following screenshot illustrates the Evaluate Data Quality transform. You can apply the DatasetMatch rule, which checks for a 100% match.
After the job runs, you can find the job run status on AWS Glue console.
The Data quality tab displays the data quality results.
AWS Glue Data Quality provides row- and rule-level outputs, as described in the previous examples.
Check the rule-level output in the Athena table. The outcome of the DatasetMatch rule shows a 100% match between the PostgreSQL source dataset and target data lake.
Check the row-level output in the Athena table. The following screenshot displays the row-level output with data quality evaluation results and rule status.
Let’s change the company value for Spencer LLC to Spencer LLC – New to simulate the impact on the data quality rule and overall results. This creates a gap in the record counts for that company name when comparing source and target.
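Assuming the change is made on the source PostgreSQL table, the statement could look like the following sketch:

```sql
-- Introduce a deliberate mismatch for one company in the source data
UPDATE test_schema.sample_data
SET    company = 'Spencer LLC - New'
WHERE  company = 'Spencer LLC';
```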
By rerunning the job and checking the AWS Glue Data Quality results, you will discover that the data quality rule has failed. This is due to the difference in company name between the source and target datasets, because the data quality rule expects a 100% match. You can lower the match threshold in the data quality expression based on your requirements.
Next, revert the changes made for the data quality rule failure simulation.
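Continuing with the sketch above, the revert is simply the inverse update:

```sql
-- Undo the simulated mismatch so the source and target agree again
UPDATE test_schema.sample_data
SET    company = 'Spencer LLC'
WHERE  company = 'Spencer LLC - New';
```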
If you rerun the job and validate the AWS Glue Data Quality results, you will find that the data quality score is back to 100%.
Clean up
If you no longer want to keep the resources you created as part of this post in your AWS account, complete the following steps:
- Delete the AWS Glue notebook and visual ETL jobs.
- Remove all data and delete the staging and curated S3 buckets.
- Delete the AWS Glue connection to the PostgreSQL database.
- Delete the AWS DMS replication task and instance.
- Delete the Data Catalog databases and tables created for this post.
Conclusion
In this post, we discussed how you can use AWS Glue Data Quality to build a scalable data parity pipeline for data modernization programs. AWS Glue Data Quality enables you to maintain the quality of your data by automating many of the manual tasks involved in data quality monitoring and management. This helps prevent bad data from entering your data lakes and data warehouses. The examples in this post provided an overview on how to set up historical, incremental, and functional data parity jobs using AWS Glue Data Quality.
To learn more about AWS Glue Data Quality, refer to Evaluating data quality with AWS Glue Studio and AWS Glue Data Quality. To dive into the AWS Glue Data Quality APIs, see Data Quality API.
Appendix
In this section, we demonstrate how to set up AWS DMS and replicate data. You can use AWS DMS to copy one-time historical data from the PostgreSQL database to the S3 staging bucket. Complete the following steps:
- On the AWS DMS console, under Migrate data in the navigation pane, choose Replication instances.
- Choose Create a replication instance.
- Choose a VPC that has connectivity to the PostgreSQL instance.
After the instance is created, it should appear with the status Available on the AWS DMS console.
- Based on our solution architecture, you now create an S3 staging bucket for AWS DMS to write replicated output. For this post, the staging bucket name is gluedq-blog-dms-staging-bucket.
- Under Migrate data in the navigation pane, choose Endpoints.
- Create a source endpoint for the PostgreSQL connection.
- After you create the source endpoint, choose Test endpoint to make sure it’s connecting successfully to the PostgreSQL instance.
- Similarly, create a target endpoint with the S3 staging bucket as a target and test the target endpoint.
- We’ll be writing the replicated output from PostgreSQL in CSV format. Set the addColumnName=true; property in the AWS DMS target endpoint configuration to make sure the schema information is written as headers in the CSV output.
Now you’re ready to create the migration task.
- Under Migrate data in the navigation pane, choose Database migration tasks.
- Create a new replication task.
- Specify the source and target endpoints you created and choose the tables that need to be replicated.
After the replication task is created, it will start replicating data automatically.
When the status shows as Load complete, data should appear in the following S3 locations (the bucket name in this example is a placeholder):
s3://<gluedq-blog-dms-staging-bucket>/staging_layer/test_schema/sample_data/
s3://<gluedq-blog-dms-staging-bucket>/staging_layer/test_schema/gender/
s3://<gluedq-blog-dms-staging-bucket>/staging_layer/test_schema/blood_group/
About the Authors
Himanshu Sahni is a Senior Data Architect in AWS Professional Services. Himanshu specializes in building data and analytics solutions for enterprise customers using AWS tools and services. He is an expert in AI/ML and big data tools like Spark, AWS Glue, and Amazon EMR. Outside of work, Himanshu likes playing chess and tennis.
Arunabha Datta is a Senior Data Architect at AWS Professional Services. He collaborates with customers and partners to design and implement modern data architectures using AWS analytics services. Arunabha’s passion lies in assisting customers with digital transformation, particularly in the areas of data lakes, databases, and AI/ML technologies. Outside of work, he enjoys photography and spending quality time with his family.
Charishma Ravoori is an Associate Data & ML Engineer at AWS Professional Services. She focuses on developing solutions for customers that include building data pipelines, developing predictive models, and building generative AI chatbots using AWS and Amazon tools. Outside of work, Charishma likes to experiment with new recipes and play the guitar.