Customers often want to augment and enrich SAP source data with other non-SAP source data. Such analytic use cases can be enabled by building a data warehouse or data lake. Customers can now use the AWS Glue SAP OData connector to extract data from SAP. The SAP OData connector supports both on-premises and cloud-hosted (native and SAP RISE) deployments. By using the AWS Glue OData connector for SAP, you can work seamlessly with your data on AWS Glue and Apache Spark in a distributed fashion for efficient processing. AWS Glue is a serverless data integration service that makes it easier to discover, prepare, move, and integrate data from multiple sources for analytics, machine learning (ML), and application development.
The AWS Glue OData connector for SAP uses the SAP ODP framework and the OData protocol for data extraction. This framework acts in a provider-subscriber model to enable data transfers between SAP systems and non-SAP data targets. The ODP framework supports full data extraction and change data capture through the operational delta queue (ODQ) mechanism. As a source for extraction from SAP, you can use SAP data extractors, ABAP CDS views, SAP BW or BW/4HANA sources, HANA information views in SAP ABAP sources, or any other ODP-enabled data source.
SAP source systems can hold historical data, and can receive constant updates. For this reason, it’s important to enable incremental processing of source changes. This blog post details how you can extract data from SAP and implement incremental data transfer from your SAP source using the SAP ODP OData framework with source delta tokens.
Solution overview
Example Corp wants to analyze the product data stored in their SAP source system. They want to understand their current product offering, in particular the number of products that they have in each of their material groups. This will include joining data from the SAP material master and material group data sources from their SAP system. The material master data is available on incremental extraction, while the material group is only available on a full load. These data sources should be combined and available to query for analysis.
Prerequisites
To complete the solution presented in the post, start by completing the following prerequisite steps:
- Configure operational data provisioning (ODP) data sources for extraction in the SAP Gateway of your SAP system.
- Create an Amazon Simple Storage Service (Amazon S3) bucket to store your SAP data.
- In the AWS Glue Data Catalog, create a database called sapgluedatabase.
- Create an AWS Identity and Access Management (IAM) role for the AWS Glue extract, transform, and load (ETL) job to use. The role must grant access to all resources used by the job, including Amazon S3 and AWS Secrets Manager. For the solution in this post, name the role GlueServiceRoleforSAP. Use the following policies:
  - AWS managed policies:
  - Inline policy:
Create the AWS Glue connection for SAP
The SAP connector supports both the CUSTOM authentication method (SAP BASIC authentication) and OAUTH. For this example, you will connect with BASIC authentication.
- Use the AWS Management Console for AWS Secrets Manager to create a secret called ODataGlueSecret for your SAP source. Details in AWS Secrets Manager should include the elements in the sketch that follows this list. You will need to enter your SAP system username in place of <your SAP username> and its password in place of <your SAP username password>.
- Create the AWS Glue connection GlueSAPOData for your SAP system by selecting the new SAP OData data source.
- Configure the connection with the appropriate values for your SAP source:
- Application host URL: The host must have the SSL certificates for the authentication and validation of your SAP host name.
- Application service path: /sap/opu/odata/iwfnd/catalogservice;v=2;
- Port number: The port number of your SAP source system.
- Client number: The client number of your SAP source system.
- Logon language: The logon language of your SAP source system.
- In the Authentication section, select CUSTOM as the Authentication Type.
- Select the AWS secret created in the preceding steps, ODataGlueSecret.
- In the Network Options section, enter the VPC, subnet, and security group used for the connection to your SAP system. For more information on connecting to your SAP system, see Configure a VPC for your ETL job.
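For reference, here is a minimal sketch of creating the secret from the first step with the AWS SDK for Python (boto3) instead of the console. The key names basicAuthUsername and basicAuthPassword are assumptions; check the AWS Glue SAP OData connection documentation for the exact keys your connection expects.

```python
import json

import boto3

# Minimal sketch: create the SAP credentials secret used by the Glue connection.
# Assumption: the connector reads BASIC credentials from these two keys.
secretsmanager = boto3.client("secretsmanager")
secretsmanager.create_secret(
    Name="ODataGlueSecret",
    SecretString=json.dumps(
        {
            "basicAuthUsername": "<your SAP username>",
            "basicAuthPassword": "<your SAP username password>",
        }
    ),
)
```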
Create an ETL job to ingest data from SAP
In the AWS Glue console, create a new Visual Editor AWS Glue job.
- Go to the AWS Glue console.
- In the navigation pane under ETL Jobs choose Visual ETL.
- Choose Visual ETL to create a job in the Visual Editor.
- For this post, edit the default name to be Material Master Job and choose Save.
On your Visual Editor canvas, select your SAP sources.
- Choose the Visual tab, then choose the plus sign to open the Add nodes menu. Search for SAP and add the SAP OData Source.
- Choose the node you just added and name it Material Master Attributes.
  - For SAP OData connection, select the GlueSAPOData connection.
  - Select the material attributes service and entity set from your SAP source.
  - For Entity Name and Sub Entity Name, select the SAP OData entity from your SAP source.
  - From the Fields, select Material, Created on, Material Group, Material Type, Old Matl number, GLUE_FETCH_SQ, DELTA_TOKEN, and DML_STATUS.
  - Enter limit 100 in the filter section to limit the data during design time.
Note that this service supports delta extraction, so Incremental transfer is the default selected option.
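Behind the visual node, AWS Glue generates a script that performs the read. The following is a hedged sketch of roughly what that generated read looks like; the node name, option keys, and values come from your own generated script and may differ. The limit 100 filter you entered also shows up in the generated options; its exact key is easiest to confirm in your own script.

```python
# Sketch of the generated SAP OData source read (names are illustrative).
MaterialMasterAttributes_node = glueContext.create_dynamic_frame.from_options(
    connection_type="SAPOData",
    connection_options={
        "connectionName": "GlueSAPOData",
        # Placeholder from your service; see the generated script for the value.
        "ENTITY_NAME": "<entityName for Material Attribute>",
    },
    transformation_ctx="MaterialMasterAttributes_node",
)
```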
After the AWS Glue service role details have been chosen, the data preview is available. You can adjust the preview to include the three new available fields:

- glue_fetch_sq: A sequence field, generated from the epoch timestamp in the order the record was received and unique for each record. Use it if you need to know or establish the order of changes in the source system.
- delta_token: Blank for all records except the last passed record, which contains the value of the ODQ token used to capture any changed records (CDC). That last record is not a transactional record from the source and is only there to pass the delta token value.
- dml_status: Shows UPDATED for all newly inserted and updated records from the source and DELETED for records that have been deleted from the source.

For delta-enabled extraction, the last record passed will contain the value DELTA_TOKEN and its delta_token field will be filled as mentioned above.
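To make the roles of these fields concrete, here is a small sketch in standard PySpark (not part of the generated script) that separates the transactional rows from the single control row carrying the token; it assumes the source dynamic frame from the node above:

```python
from pyspark.sql import functions as F

df = MaterialMasterAttributes_node.toDF()

# Transactional rows: delta_token is blank; order them by arrival sequence.
data_df = df.filter(
    F.col("delta_token").isNull() | (F.col("delta_token") == "")
).orderBy("glue_fetch_sq")

# The single non-transactional control row carries the ODQ delta token.
token_rows = (
    df.filter(F.col("delta_token").isNotNull() & (F.col("delta_token") != ""))
    .select("delta_token")
    .collect()
)
next_delta_token = token_rows[0]["delta_token"] if token_rows else None
```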
- Add another SAP OData source connection to your canvas, and name this node Material Group Text.
  - Select the material group service and entity set from your SAP source.
  - For Entity Name and Sub Entity Name, select the SAP OData entity from your SAP source.

Note that this service supports full extraction, so Full transfer is the default selected option. You can also preview this dataset.

- When previewing the data, notice the language key. SAP passes all languages, so add a filter of SPRAS = 'E' to extract only English. Note that this uses the SAP internal value of the field.
- Add a Change Schema transform node to the canvas after Material Group Text. (A sketch of the generated transform follows this list.)
  - Rename the material group field in the target key to matkl2, so it is different from your first source.
  - Under Drop, select spras, odq_changemode, odq_entitycntr, dml_status, delta_token, and glue_fetch_sq.
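The Change Schema node compiles to an ApplyMapping transform in the script. A sketch follows, assuming the text table exposes a description field (shown here as wgbez, the usual SAP field for the material group description, which is an assumption); fields left out of the mappings, including spras and the ODQ and delta columns, are dropped automatically:

```python
from awsglue.transforms import ApplyMapping

# Sketch of the Change Schema step: rename matkl -> matkl2 and keep only the
# mapped fields, which drops spras, odq_changemode, odq_entitycntr,
# dml_status, delta_token, and glue_fetch_sq.
ChangeSchema_node = ApplyMapping.apply(
    frame=MaterialGroupText_node,
    mappings=[
        ("matkl", "string", "matkl2", "string"),
        ("wgbez", "string", "wgbez", "string"),  # assumption: description field
    ],
    transformation_ctx="ChangeSchema_node",
)
```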
- Add a Join transform to your canvas, bringing together both source datasets. (One way the generated code expresses this join is sketched after this list.)
  - Ensure the node parents of both Material Master Attributes and Change Schema have been chosen.
  - Select the Join type of Left join.
  - Select the join conditions as the key fields from each source:
    - Under Material Master Attributes, select matkl.
    - Under Change Schema, select matkl2.
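One way the left join can be expressed in the script (the visual editor generates equivalent code; the node variable names here are illustrative):

```python
from awsglue.dynamicframe import DynamicFrame

# Left join: keep every material master row, enriched with group text when present.
mm_df = MaterialMasterAttributes_node.toDF()
mg_df = ChangeSchema_node.toDF()
joined_df = mm_df.join(mg_df, mm_df["matkl"] == mg_df["matkl2"], "left")
Join_node = DynamicFrame.fromDF(joined_df, glueContext, "Join_node")
```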
You can preview the output to ensure the correct data is being returned. Now, you are ready to store the result.
- Add the Amazon S3 bucket target to your canvas. (See the sink sketch after this list.)
  - Ensure the node parent is Join.
  - For Format, select Parquet.
  - For S3 Target Location, browse to the S3 bucket you created in the prerequisites and add materialmaster/ to the S3 target location.
  - For the Data Catalog update options, select Create a table in the Data Catalog and on subsequent runs, update the schema and add new partitions.
  - For Database, select the name of the AWS Glue database created earlier, sapgluedatabase.
  - For Table name, enter materialmaster.
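The S3 target with Data Catalog updates compiles to a getSink call along these lines; the bucket name and node variable are placeholders:

```python
# Sketch of the S3 sink: write Parquet and create/update the Catalog table.
S3_sink = glueContext.getSink(
    path="s3://<your-bucket>/materialmaster/",
    connection_type="s3",
    updateBehavior="UPDATE_IN_DATABASE",
    partitionKeys=[],
    enableUpdateCatalog=True,
    transformation_ctx="S3_sink",
)
S3_sink.setCatalogInfo(
    catalogDatabase="sapgluedatabase", catalogTableName="materialmaster"
)
S3_sink.setFormat("glueparquet")
S3_sink.writeFrame(Join_node)
```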
- Choose Save to save your job. Your job should look like the following figure.
Clone your ETL job and make it incremental
After your ETL job has been created, it's ready to be cloned and extended with incremental data handling using delta tokens.
To do this, you will need to modify the job script directly. You will add a statement that retrieves the last delta token (stored as a job tag) and adds the delta token value to the request for the next execution of the job, which enables the delta-enabled SAP OData service to return only the changes when retrieving the data on the next job run.
The first execution of the job will not have a delta token value on the tag; therefore, the call will be an initial run and the delta token will subsequently be stored in the tags for future executions.
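Conceptually, the token is stored as a tag on the Glue job itself. The mechanism looks roughly like the following sketch, which uses real AWS Glue APIs but a hypothetical tag key (the library you add below wraps logic along these lines):

```python
import boto3

glue = boto3.client("glue")
# Assumption: build the job ARN from your own Region, account ID, and job name.
job_arn = "arn:aws:glue:<region>:<account-id>:job/Material Master Job Delta"

# First run: no tag yet, so the extraction runs as an initial (full) load.
tags = glue.get_tags(ResourceArn=job_arn).get("Tags", {})
delta_token = tags.get("DELTA_TOKEN")  # hypothetical tag key

# After a successful run, persist the token returned by the source for next time.
glue.tag_resource(ResourceArn=job_arn, TagsToAdd={"DELTA_TOKEN": "<token value>"})
```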
- Go to the AWS Glue console.
- In the navigation pane under ETL Jobs choose Visual ETL.
- Select the Material Master Job, choose Actions, and select Clone job.
- Change the name of the job to Material Master Job Delta.
- Add the additional Python library that takes care of storing and retrieving the delta tokens for each job execution. To do this, navigate to the Job Details tab, scroll down, and expand the Advanced properties section. In the Python library path, add the following path:
s3://aws-blogs-artifacts-public/artifacts/BDB-4789/sap_odata_state_management.zip
- Choose the Script tab, then choose Edit script in the top right corner. Choose Confirm to confirm that your job will be script-only.
Apply the following changes to the script to enable the delta token.
- Import the SAP OData state management library classes from the Python library you added above, by adding the following code at row 8.
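The exact import depends on the contents of the zip. A plausible form, assuming the archive exposes a sap_odata_state_management package (verify the module and class names inside the zip before using them):

```python
# Assumption: module and class names inferred from the library's file name;
# check the zip contents for the exact import path.
from sap_odata_state_management.state_manager import (
    StateManagerFactory,
    StateManagerType,
    StateType,
)
```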
- The next few steps retrieve and persist the delta token in the job tags so it can be accessed by the subsequent job execution. The delta token is added to the request back to the SAP source, so the incremental changes are extracted. If no token is passed, the load runs as an initial load, and the token is persisted for the next run, which will then be a delta load. To initialize the sap_odata_state_management library, extract the connection options into a variable and update them using the state manager. Do this by adding the following code at line 16 (after the job.init statement). You can find the <key of MaterialMasterAttributes node> and the <entityName for Material Attribute> in the existing generated script under # Script generated for node Material Master Attributes. Be sure to replace them with the appropriate values.
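A sketch of that initialization follows, assuming the library exposes a factory and a helper that merges the stored token into the connection options. The method names are assumptions to verify against the zip, and the angle-bracket placeholders stay placeholders for you to fill in from your generated script:

```python
# Hypothetical API: verify class and method names against the library's code.
key = "<key of MaterialMasterAttributes node>"
state_manager = StateManagerFactory.create_manager(
    manager_type=StateManagerType.JOB_TAG,
    state_type=StateType.DELTA_TOKEN,
    options={"job_name": args["JOB_NAME"]},
)

# Options copied from the generated source node, then enriched with the
# stored delta token (if any) so the next read is incremental.
connection_options = {
    "connectionName": "GlueSAPOData",
    "ENTITY_NAME": "<entityName for Material Attribute>",
}
connection_options_with_state = state_manager.get_connection_options_with_state(
    key, connection_options
)
```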
- Comment out the existing script generated for node Material Master Attributes by adding a # at the start of each of its lines, and add the replacement snippet that reads using the updated connection options.
- To extract the delta token from the dynamic frame and persist it in the job tags, add a final code snippet just above the last line in your script (before job.commit()). Both pieces are sketched below.
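Putting both steps together, a hedged sketch; the state_manager methods remain assumptions to verify against the library, while the read itself mirrors the generated source node:

```python
# Replacement for the commented-out Material Master Attributes read, now
# driven by the options that carry the stored delta token.
MaterialMasterAttributes_node = glueContext.create_dynamic_frame.from_options(
    connection_type="SAPOData",
    connection_options=connection_options_with_state,
    transformation_ctx="MaterialMasterAttributes_node",
)

# ... rest of the generated script (transforms, join, S3 sink) ...

# Just before job.commit(): pull the token from the control record in the
# frame and persist it in the job tags for the next run (hypothetical method).
state_manager.update_state_from_dynamicframe(key, MaterialMasterAttributes_node)
job.commit()
```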
This is what your final script should look like:
- Choose Save to save your changes.
- Choose Run to run your job. Note that there are currently no tags in your job details.
- Wait for your job run to be successfully completed. You can see the status on the Runs tab.
- After your job run is complete, you will notice on the Job Details tab that a tag has been added. The next job run will read this token and run a delta load.
Query your SAP data source data
The AWS Glue job run has created an entry in the Data Catalog, enabling you to query the data immediately.
- Go to the Amazon Athena console.
- Choose Launch Query Editor.
- Make sure you have an appropriate workgroup assigned, or create a workgroup if required.
- Select the sapgluedatabase and run a query (such as the following) to start analyzing your data.
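For example, Example Corp's product-count question can be answered with a query like the one below, shown here submitted through boto3 to keep the post's examples in Python (you can paste the same SQL into the query editor). The column names are assumptions, so check the table schema in the Data Catalog first:

```python
import boto3

athena = boto3.client("athena")
athena.start_query_execution(
    QueryString="""
        SELECT matkl AS material_group, COUNT(*) AS product_count
        FROM sapgluedatabase.materialmaster
        GROUP BY matkl
        ORDER BY product_count DESC
    """,
    WorkGroup="primary",  # assumption: substitute your workgroup
    ResultConfiguration={"OutputLocation": "s3://<your-bucket>/athena-results/"},
)
```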
Clean up
To avoid incurring charges, clean up the resources used in this post from your AWS account, including the AWS Glue jobs, SAP OData connection, Glue Data Catalog entry, Secrets Manager secret, IAM role, the contents of the S3 bucket, and the S3 bucket.
Conclusion
In this post, we showed you how to create a serverless incremental data load process for multiple SAP data sources. The approach used AWS Glue to incrementally load the data from an SAP source using SAP ODP delta tokens and then load the data into Amazon S3.
The serverless nature of AWS Glue means that there is no infrastructure to manage, and you pay only for the resources consumed while your jobs are running (plus storage costs for outputs). As organizations increasingly become data driven, this SAP connector can provide an efficient, cost-effective, performant, and secure way to include SAP source data in your big data and analytics outcomes. For more information, see AWS Glue.
About the authors
Allison Quinn is a Sr. ANZ Analytics Specialist Solutions Architect for Data and AI based in Melbourne, Australia, working closely with Financial Services customers in the region. Allison worked for over 15 years with SAP products before concentrating her analytics technical specialty on AWS native services. She's very passionate about all things data, and about democratizing it so that customers of all types can drive business benefit.
Pavol is an Innovation Solution Architect at AWS, specializing in SAP cloud adoption across EMEA. With over 20 years of experience, he helps global customers migrate and optimize SAP systems on AWS. Pavol develops tailored strategies to transition SAP environments to the cloud, leveraging AWS’s agility, resiliency, and performance. He assists clients in modernizing their SAP landscapes using AWS’s AI/ML, data analytics, and application services to enhance intelligence, automation, and performance.
Partha Pratim Sanyal is a Software Development Engineer with AWS Glue in Vancouver, Canada, specializing in Data Integration, Analytics, and Connectivity. With extensive backend development expertise, he is dedicated to crafting impactful, customer-centric solutions. His work focuses on building features that empower users to effortlessly analyze and understand their data. Partha’s commitment to addressing complex user needs drives him to create intuitive and value-driven experiences that elevate data accessibility and insights for customers.
Diego is an experienced Enterprise Solutions Architect with over 20 years' experience across SAP technologies, specializing in SAP innovation and data and analytics. He has worked both as a partner and as a customer, giving him a complete perspective on what it takes to sell, implement, and run systems and organizations. He is passionate about technology and innovation, focusing on customer outcomes and delivering business value.
Luis Alberto Herrera Gomez is a Software Development Engineer with AWS Glue in Vancouver, specializing in backend engineering, microservices, and cloud computing. With 7-8 years of experience, including roles as a backend and full-stack developer for multiple startups before joining Amazon and AWS, Luis focuses on developing scalable and efficient cloud-based applications. His expertise in AWS technologies enables him to design high-performance systems that handle complex data processing tasks. Luis is passionate about leveraging cloud computing to solve challenging business problems.