Organizations are increasingly using data to make decisions and drive innovation. However, building data-driven applications can be challenging. It often requires multiple teams working together and integrating various data sources, tools, and services. For example, creating a targeted marketing app involves data engineers, data scientists, and business analysts using different systems and tools. This complexity leads to several issues: it takes time to learn multiple systems, it's difficult to manage data and code across different services, and controlling access for users across various systems is complicated. Currently, organizations often create custom solutions to connect these systems, but they want a more unified approach that allows them to choose the best tools while providing a streamlined experience for their data teams. The use of separate data warehouses and lakes has created data silos, leading to problems such as lack of interoperability, duplicate governance efforts, complex architectures, and slower time to value.
You can use Amazon SageMaker Lakehouse to achieve unified access to data in both data warehouses and data lakes. Through SageMaker Lakehouse, you can use preferred analytics, machine learning, and business intelligence engines through an open Apache Iceberg REST API to help ensure secure access to data with consistent, fine-grained access controls.
Solution overview
Let's consider Example Retail Corp, which is facing increasing customer churn. Its management wants to implement a data-driven approach to identify at-risk customers and develop targeted retention strategies. However, the customer data is scattered across different systems and services, making it challenging to perform comprehensive analyses. Today, Example Retail Corp manages sales data in its data warehouse and customer data in Apache Iceberg tables in Amazon Simple Storage Service (Amazon S3). It uses Amazon EMR Serverless for data processing and machine learning. For governance, it uses AWS Glue Data Catalog as the central technical catalog and AWS Lake Formation as the permission store for enforcing fine-grained access controls. Its main objective is to implement a unified data management system that combines data from varied sources, enables secure access across the enterprise, and allows disparate teams to use preferred tools to predict, analyze, and consume customer churn information.
Let’s examine how Example Retail Corp can use SageMaker Lakehouse to achieve its unified data management vision using this reference architecture diagram.
Personas
There are four personas used in this solution.
- The Data Lake Admin has an AWS Identity and Access Management (IAM) admin role and is a Lake Formation administrator responsible for managing user permissions to catalog objects using Lake Formation.
- The Data Warehouse Admin has an IAM admin role and manages databases in Amazon Redshift.
- The Data Engineer has an IAM ETL role and runs the extract, transform, and load (ETL) pipeline using Spark to populate the Lakehouse catalog on RMS.
- The Data Analyst has an IAM analyst role and performs churn analysis on SageMaker Lakehouse data using Amazon Athena and Amazon Redshift.
Dataset
The following table describes the elements of the dataset.
| Schema | Table | Data source |
| --- | --- | --- |
| `public` | `customer_churn` | Lakehouse catalog with storage on RMS |
| `customerdb` | `customer` | Lakehouse catalog with storage on Amazon S3 |
| `sales` | `store_sales` | Data warehouse |
Prerequisites
To follow along with the solution walkthrough, you need the following:
- Create a user-defined IAM role following the instructions in Requirements for roles used to register locations. For this post, we will use the IAM role `LakeFormationRegistrationRole`.
- An Amazon Virtual Private Cloud (Amazon VPC) with private and public subnets.
- Create an S3 bucket. For this post, we will use `customer_data` as the bucket name.
- Create an Amazon Redshift Serverless endpoint called `sales_dw`, which will host the `store_sales` dataset.
- Create an Amazon Redshift Serverless endpoint called `sales_analysis_dw` for churn analysis by sales analysts.
- Create an IAM role named `DataTransferRole` following the instructions in Prerequisites for managing Amazon Redshift namespaces in the AWS Glue Data Catalog.
- Install or update the latest version of the AWS CLI. For instructions, see Installing or updating to the latest version of the AWS CLI.
- Create a data lake admin using the instructions in Create a data lake administrator. For this post, we will use an IAM role called Admin.
Configure data lake administrators
Sign in to the AWS Management Console as Admin and open the AWS Lake Formation console. In the navigation pane, under Administration, choose Administrative roles and tasks. Under Data lake administrators, choose Add:
- On the Add administrators page, under Access type, choose Data lake administrator.
- Under IAM users and roles, select Admin. Choose Confirm.
- On the Add administrators page, for Access type, select Read-only administrators. Under IAM users and roles, select AWSServiceRoleForRedshift and choose Confirm. This step enables Amazon Redshift to discover and access catalog objects in AWS Glue Data Catalog.
Solution walkthrough
Create a customer table in the Amazon S3 data lake in AWS Glue Data Catalog
- Create an AWS Glue database called `customerdb` in the default catalog in your account by going to the AWS Lake Formation console and choosing Databases in the navigation pane.
- Select the database that you just created and choose Edit.
- Clear the checkbox Use only IAM access control for new tables in this database.
- Sign in to the Athena console as Admin, select a workgroup that the role has access to, and run the following SQL:
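The original DDL isn't reproduced here; the following is a minimal sketch that creates an Iceberg table named `customer` in `customerdb`, with an illustrative column list:

```sql
-- Sketch: create an Iceberg table for customer data in the S3 data lake.
-- The column list is an assumption; replace it with your actual schema.
CREATE TABLE customerdb.customer (
    c_customer_id   INT,
    c_first_name    STRING,
    c_last_name     STRING,
    c_email_address STRING
)
LOCATION 's3://customer_data/customer/'
TBLPROPERTIES ('table_type' = 'ICEBERG');
```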
- Register the S3 bucket with Lake Formation:
- Sign in to the Lake Formation console as Data Lake Admin.
- In the navigation pane, choose Administration, and then choose Data lake locations.
- Choose Register location.
- For the Amazon S3 path, enter `s3://customer_data/`.
- For the IAM role, choose `LakeFormationRegistrationRole`.
- For Permission mode, select Lake Formation.
- Choose Register location.
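You can also register the location from the AWS CLI; the following sketch assumes the bucket and role from the prerequisites (replace `<account_id>` with your AWS account ID):

```bash
# Register the customer_data bucket as a Lake Formation data lake location.
aws lakeformation register-resource \
  --resource-arn arn:aws:s3:::customer_data \
  --role-arn arn:aws:iam::<account_id>:role/LakeFormationRegistrationRole
```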
Create the salesdb database in Amazon Redshift
- Sign in to the Redshift endpoint `sales_dw` as the Admin user and run the following script to create a database named `salesdb`:
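A minimal version of the script is a single statement:

```sql
CREATE DATABASE salesdb;
```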
- Connect to `salesdb` and run the following script to create the `sales` schema and the `store_sales` table and populate it with data:
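The original script isn't reproduced here; the following sketch creates the schema and a simplified `store_sales` table with a few sample rows (the TPC-DS-style column set is an assumption):

```sql
-- Sketch: create the sales schema and a simplified store_sales table.
CREATE SCHEMA sales;

CREATE TABLE sales.store_sales (
    ss_customer_sk INT,
    ss_item_sk     INT,
    ss_quantity    INT,
    ss_net_paid    DECIMAL(7,2)
);

-- Populate with illustrative sample data.
INSERT INTO sales.store_sales VALUES
    (1, 101, 2, 39.98),
    (2, 102, 1, 19.99),
    (3, 103, 5, 99.95);
```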
Create the churn_lakehouse RMS catalog in the AWS Glue Data Catalog
This catalog will contain the customer churn table, held in an AWS Glue managed catalog with managed RMS storage. The data is produced by an analysis run in EMR Serverless and is made available in the presentation layer to serve business intelligence (BI) applications.
Create the Lakehouse (RMS) catalog
- Sign in to the Lake Formation console as Data Lake Admin.
- In the left navigation pane, choose Data Catalog, and then choose Catalogs. Choose Create catalog.
- Provide the details for the catalog:
- Name: Enter `churn_lakehouse`.
- Type: Select Managed catalog.
- Storage: Select Redshift.
- Under Access from engines, make sure that Access this catalog from Iceberg compatible engines is selected.
- Choose Next.
- Under Principals, select IAM users and roles. Under IAM users and roles, select the Admin role. Under Catalog permissions, select Super user.
- Choose Add, and then choose Create catalog.
Access churn_lakehouse RMS catalog from Amazon EMR Spark engine
- Set up an EMR Studio.
- Create an EMR Serverless application using the AWS CLI command shown below.
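A sketch of the command; the release label and the subnet and security group IDs are placeholders to adjust for your VPC:

```bash
# Create a Spark-based EMR Serverless application named Churn_Analysis.
aws emr-serverless create-application \
  --name Churn_Analysis \
  --type SPARK \
  --release-label emr-7.5.0 \
  --network-configuration '{"subnetIds":["subnet-xxxx"],"securityGroupIds":["sg-xxxx"]}'
```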
Sign in to EMR Studio and use the EMR Studio Workspace
- Sign in to the EMR Studio console and choose Workspaces in the navigation pane, and then choose Create Workspace.
- Enter a name and a description for the Workspace.
- Choose Create Workspace. A new tab containing JupyterLab will open automatically when the Workspace is ready. Enable pop-ups in your browser if necessary.
- Choose the Compute icon in the navigation pane to attach the EMR Studio Workspace to a compute engine.
- Select EMR Serverless application for Compute type.
- Choose `Churn_Analysis` for EMR-S Application.
- For Runtime role, choose Admin.
- Choose Attach.
Download the notebook, import it into the Workspace, choose the PySpark kernel, and run the cells that create the table.
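If you don't have the notebook at hand, the following PySpark sketch shows the kind of cells it contains; the schema and sample values are illustrative assumptions, and the Spark session on EMR Serverless is assumed to be preconfigured for SageMaker Lakehouse access:

```python
# Sketch of the notebook cells: create and populate the customer_churn table
# in the churn_lakehouse RMS catalog (illustrative schema and data).
spark.sql("CREATE DATABASE IF NOT EXISTS churn_lakehouse.public")

spark.sql("""
    CREATE TABLE IF NOT EXISTS churn_lakehouse.public.customer_churn (
        customer_id INT,
        churn_score DOUBLE,
        churned     BOOLEAN
    )
""")

spark.sql("""
    INSERT INTO churn_lakehouse.public.customer_churn
    VALUES (1, 0.87, true), (2, 0.12, false), (3, 0.65, true)
""")
```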
Manage your users’ fine-grained access to catalog objects using AWS Lake Formation
Grant the following permissions to the Analyst role on the resources shown in the following table.
| Catalog | Database | Table | Permission |
| --- | --- | --- | --- |
| `<account_id>:churn_lakehouse/dev` | `public` | `customer_churn` | Column permission |
| `<account_id>` | `customerdb` | `customer` | Table permission |
| `<account_id>:sales_lakehouse/salesdb` | `sales` | `store_sales` | All table permission |
- Sign in to the Lake Formation console as Data Lake Admin. In the navigation pane, choose Data lake permissions, and then choose Grant.
- For IAM users and roles, choose the Analyst IAM role. For the resources, choose the catalog, database, table, and permissions from the first row of the preceding table, and then choose Grant.
- Repeat the grant for the second row of the table.
- Repeat the grant for the third row of the table.
Perform churn analysis using multiple engines
Using Athena
Sign in to the Athena console using the IAM Analyst role and select the workgroup that the role has access to. Run the following SQL, combining data from the data warehouse and the Lakehouse RMS catalog for churn analysis:
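A sketch of such a query; the catalog references and column names are assumptions to adapt to your environment:

```sql
-- Sketch: combine churn scores (RMS catalog), customer details (S3 data lake),
-- and sales (data warehouse) to rank at-risk customers by spend.
SELECT c.c_customer_id,
       c.c_first_name,
       c.c_last_name,
       ch.churn_score,
       SUM(s.ss_net_paid) AS total_spend
FROM "churn_lakehouse/dev"."public"."customer_churn" ch
JOIN "awsdatacatalog"."customerdb"."customer" c
  ON ch.customer_id = c.c_customer_id
JOIN "sales_lakehouse/salesdb"."sales"."store_sales" s
  ON s.ss_customer_sk = c.c_customer_id
WHERE ch.churned
GROUP BY c.c_customer_id, c.c_first_name, c.c_last_name, ch.churn_score
ORDER BY total_spend DESC;
```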
The following figure shows the results, which include customer IDs, names, and other information.
Using Amazon Redshift
Sign in to Redshift Query Editor v2 on the `sales_analysis_dw` endpoint using the IAM Analyst role. Sign in with temporary credentials using your IAM identity and run the following SQL command:
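A sketch of the query; the auto-mounted database notation and column names are assumptions to adapt to your environment:

```sql
-- Sketch: the same churn analysis from Redshift, reading the auto-mounted
-- Glue Data Catalog objects.
SELECT c.c_customer_id,
       c.c_first_name,
       c.c_last_name,
       ch.churn_score
FROM "churn_lakehouse@dev".public.customer_churn ch
JOIN awsdatacatalog.customerdb.customer c
  ON ch.customer_id = c.c_customer_id
WHERE ch.churned
ORDER BY ch.churn_score DESC;
```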
The following figure shows the results, which include customer IDs, names, and other information.
Clean up
Complete the following steps to delete the resources you created to avoid unexpected costs:
- Delete the Redshift Serverless workgroups.
- Delete the Redshift Serverless associated namespaces.
- Delete the EMR Studio and the EMR Serverless application you created.
- Delete the AWS Glue resources and Lake Formation permissions.
- Empty the S3 bucket and then delete it.
Conclusion
In this post, we showcased how you can use Amazon SageMaker Lakehouse to achieve unified access to data across your data warehouses and data lakes. With unified access, you can use preferred analytics, machine learning, and business intelligence engines through an open Apache Iceberg REST API and secure access to data with consistent, fine-grained access controls. Try Amazon SageMaker Lakehouse in your environment and share your feedback with us.
About the Authors
Srividya Parthasarathy is a Senior Big Data Architect on the AWS Lake Formation team. She works with the product team and customers to build robust features and solutions for their analytical data platform. She enjoys building data mesh solutions and sharing them with the community.
Harshida Patel is an Analytics Specialist Principal Solutions Architect at AWS.