Creating SAP insights with Qlik, Snowflake and AWS.

A technical blog about Qlik Cloud Data Integration


How to create insights from SAP (ERP) with Qlik, Snowflake and AWS.

Order-to-cash is a critical business process for any organization, especially retail and manufacturing. It begins with booking a sales order (often on credit), followed by fulfilling that order, invoicing the customer, and finally managing accounts receivable for customer payments.

Fulfilling sales orders and invoicing can impact customer satisfaction, and accounts receivable and payments impact working capital and cash liquidity. As a result, the order-to-cash process is the lifeblood of the business and is critical to optimize.

SAP Enterprise Resource Planning (ERP) contains valuable sales order, distribution, and financial data, but it can be challenging to access that data and integrate it with data from other sources to gain a complete picture of the end-to-end process.

For example, understanding the impact of weather conditions on supply chain logistics can directly impact customer satisfaction and their willingness to pay on time. Organizational silos and data fragmentation can make it even harder to integrate with modern analytics projects. That in turn limits the value you get from your SAP data.

Order-to-cash is a process that requires active intelligence – a state of continuous intelligence that supports triggering immediate actions based on real-time, up-to-date data. Streamlining this analytics data pipeline typically requires complex data integrations and analytics that can take years to design and build – but it doesn’t have to be that way.

  • What if there was a way to combine the power of Amazon Web Services (AWS) and its artificial intelligence (AI) and machine learning (ML) engine with the computing power of Snowflake?
  • What if you could use a single Qlik software-as-a-service (SaaS) platform to drive automation of ingestion, transformation, and analysis for some of the most common SAP-focused business transformation initiatives?
  • What if suppliers and retailers/manufacturers could collaborate better by enabling mutual access to real-time data through Snowflake's data sharing and marketplace capabilities?

In this blog, we discuss Qlik Cloud Data Integration accelerators for SAP in conjunction with Snowflake and AWS.

Qlik Cloud Data Integration

Qlik Cloud Data Integration accelerators integrate with Snowflake to automate discovery, transformation, and analysis to solve some of the most common SAP problems. This enables users to derive business insights that drive decision making.

Qlik provides a single platform for extracting data from SAP and loading it into Snowflake as the data repository. Qlik keeps the data in sync with its Change Data Capture (CDC) gateway, which powers the transformation engine, enabling Qlik to transform the raw SAP data into business-friendly data ready for analysis.

Qlik leverages its Qlik Cloud Analytics service on the SAP data to enable analysis and visualization, and to feed data into Amazon SageMaker to generate predictions with its artificial intelligence (AI) and machine learning (ML) engine.

Snowflake: the Data Collaboration Cloud

With AWS competencies in data and analytics, as well as machine learning, Snowflake has reinvented the data cloud to meet today’s digital transformation needs. Organizations across industries use Snowflake to centralize, manage, collaborate on data, and generate actionable insights.

Here are the top reasons organizations trust Snowflake with their data:

  • Snowflake is a cloud- and region-independent data cloud. For example, if a customer’s SAP data is hosted on AWS, they can provision Snowflake on AWS and use AWS PrivateLink for secure and direct connectivity between SAP, AWS services, and Snowflake.
  • Separation of compute and storage allows users to apply granular controls and isolation, as well as role-based access control (RBAC) policies, for different types of workloads. Extract, transform, load (ETL) tasks can run on compute that is isolated from critical business intelligence (BI) reports and from feature engineering for machine learning, and users control how much compute each of these workloads receives.
  • A thriving data marketplace to enrich customer data with third-party offerings. The marketplace enables secure data sharing internally and externally, without creating duplicate copies of data.
  • A strong tech partner ecosystem offering best-of-breed products in every data category: data integration, data governance, BI, data observability, AI/ML.
  • Ability to bring code to data, instead of exporting or moving data to separate processing systems, via Snowpark. Code written in Java, Scala, or Python is stored and executed in Snowflake (a minimal sketch follows this list).
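
To make the Snowpark point concrete, here is a minimal, hypothetical sketch in Snowpark for Python. The connection details, the ETL_ROLE role, the ETL_WH warehouse, and the SAP_RAW.LANDING.SALES_ORDERS table are placeholder assumptions; the point is that the filter and aggregation are translated to SQL and executed inside Snowflake, on compute dedicated to this workload.

```python
from snowflake.snowpark import Session

# Placeholder connection details: use your own account, a role scoped to the
# landing schema (RBAC), and a warehouse reserved for this workload so that
# ETL compute stays isolated from BI and ML compute.
session = Session.builder.configs({
    "account": "<account_identifier>",
    "user": "<user>",
    "password": "<password>",
    "role": "ETL_ROLE",
    "warehouse": "ETL_WH",
    "database": "SAP_RAW",
    "schema": "LANDING",
}).create()

# These DataFrame operations are pushed down and executed in Snowflake;
# the order data never leaves the platform.
orders = session.table("SALES_ORDERS")
late_per_customer = (
    orders.filter(orders["ACTUAL_DELIVERY_DATE"] > orders["REQUESTED_DELIVERY_DATE"])
          .group_by("SOLD_TO_CUSTOMER")
          .count()
)
late_per_customer.show()
```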

Joint Solution Overview

Let’s dive into an SAP business use case that all companies share: order-to-cash. This process in SAP usually involves the sales and distribution (SD) module, but we’ve added accounts receivable to complete the story.

Figure 2 – SAP accelerators architecture for the order-to-cash process



The SAP accelerators use pre-built logic to transform raw SAP data into business-oriented analytics. It starts with getting the data from SAP; you can deploy and install Qlik Data Gateway - Data Movement near the SAP system to pull the data from SAP and put it into Snowflake, without impacting the performance of the SAP production system.

For the SAP accelerators, we used the SAP extractors as the basis for the data layer. This pre-transformed data allows us to use smarter methods to extract data from SAP. For the order-to-cash use case, we need 28 extractors, compared with more than 200 tables if we were to go directly to the underlying SAP structures.

Using a single endpoint, we pull the data from SAP into Snowflake and partition the actual data by use case, but use a common set of dimensions to feed the scenarios. Below is how this architecture looks conceptually.

Figure 3 – QCDI data transformation process.



This process enables easy future addition of new SAP use cases.

With our single endpoint, we can load data into our landing and storage areas simultaneously. The data is only landed once, and Qlik uses as many views as possible to avoid data replication within Snowflake.

We now have two different dimensional loads, because some dimensions are not CDC-enabled (delta-enabled). Those are reloaded on a schedule and merged with the delta dimensions in a transformation layer, which presents a single set of entities for building the data mart layer.
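
As an illustration only, that merge of a scheduled full reload with a delta (CDC) load could look like the view below, expressed as pushdown SQL issued through Snowpark; the TRANSFORM/STORAGE schema names, table names, and columns are assumptions, not the actual objects the accelerators generate.

```python
# `session` is a snowflake.snowpark.Session, created as in the earlier sketch.
# Combine the full-reload and delta dimension tables into one entity,
# keeping only the most recent record per customer key.
session.sql("""
    CREATE OR REPLACE VIEW TRANSFORM.DIM_CUSTOMER AS
    SELECT *
    FROM (
        SELECT KUNNR, NAME, COUNTRY, LOAD_TS FROM STORAGE.DIM_CUSTOMER_FULL
        UNION ALL
        SELECT KUNNR, NAME, COUNTRY, LOAD_TS FROM STORAGE.DIM_CUSTOMER_DELTA
    )
    QUALIFY ROW_NUMBER() OVER (PARTITION BY KUNNR ORDER BY LOAD_TS DESC) = 1
""").collect()
```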

Let’s take a look at the process for order-to-cash. We land and store the data in Snowflake, and in the landing layer we add the rules that convert the names and columns from the SAP extractors to user-friendly names.

Figure 4 – QCDI SAP rules and metadata engine.

You might notice that there are a lot of rules. We ran a report in SAP to extract all the metadata via extractors, but not all the names are the same. For example, KUNNR is ship-to-customer in one extractor and sold-to-customer in another.

Each extractor has its own definition and we used Qlik Sense to create a metadata dictionary that we can apply in the user interface (UI).
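
As a hypothetical sketch of what such a dictionary amounts to, the snippet below applies a per-extractor rename map when building the friendly-name views; the extractor names (2LIS_11_VAITM, 2LIS_12_VCITM) and the mappings are illustrative, not the accelerators' actual rule set.

```python
# Hypothetical per-extractor rename rules: the same SAP field (KUNNR) maps to
# a different business name depending on the extractor it was landed from.
RENAME_RULES = {
    "2LIS_11_VAITM": {"VBELN": "SALES_DOCUMENT",    "KUNNR": "SOLD_TO_CUSTOMER"},
    "2LIS_12_VCITM": {"VBELN": "DELIVERY_DOCUMENT", "KUNNR": "SHIP_TO_CUSTOMER"},
}

def create_friendly_view(session, extractor: str) -> None:
    """Create a storage-layer view over the landed extractor table,
    aliasing technical SAP names to business-friendly names."""
    renames = RENAME_RULES[extractor]
    select_list = ", ".join(f"{src} AS {dst}" for src, dst in renames.items())
    session.sql(
        f"CREATE OR REPLACE VIEW STORAGE.{extractor}_V AS "
        f"SELECT {select_list} FROM LANDING.{extractor}"
    ).collect()

# `session` is a snowflake.snowpark.Session as in the earlier sketch.
# for extractor in RENAME_RULES:
#     create_friendly_view(session, extractor)
```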

Figure 5 – QCDI SAP transformations.

As you can see, several important things are happening at the same time. Within this no-code UI, we have routed more than 80 extractors to a landing area, renamed them, and added views with friendly names in the storage layer.

This is important because many SAP solutions require flows or coding for each extractor or table to be maintained as an individual piece of code, but within Qlik everything is managed simultaneously via the SaaS UI (no coding required).

Once the data is properly indexed and renamed, we start running our transformation layers. This process combines the dimensions into single entities and creates the business-specific model for a use case such as order-to-cash.

The transformation layer is where we will manipulate the data using 100% pushdown SQL to Snowflake. Some examples of transformations include currency pivoting, descriptive text flattening, and other SQL manipulations.
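
For illustration, a currency pivot pushed down to Snowflake could look like the view below; the STORAGE/TRANSFORM object names, columns, and currency list are assumptions.

```python
# `session` is a snowflake.snowpark.Session as in the earlier sketches.
# Pivot order amounts by currency so each currency becomes its own column;
# the statement runs entirely inside Snowflake (100% pushdown).
session.sql("""
    CREATE OR REPLACE VIEW TRANSFORM.ORDER_AMOUNTS_BY_CURRENCY AS
    SELECT *
    FROM (
        SELECT SALES_DOCUMENT, CURRENCY, NET_VALUE
        FROM STORAGE.SALES_ORDER_ITEMS_V
    )
    PIVOT (SUM(NET_VALUE) FOR CURRENCY IN ('EUR', 'USD', 'GBP'))
""").collect()
```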

In addition to the SQL manipulations, a Python Snowpark stored procedure was also created in Snowflake and called via the Qlik SQL pushdown. This demonstrates how engineers familiar with the Python language can build transformation steps as a stored procedure in Snowflake and access it via Qlik.
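
The stored procedure below is a hypothetical stand-in for that pattern: a Snowpark for Python function registered as a permanent procedure in Snowflake, which Qlik's pushdown SQL can then CALL. The procedure name, stage, tables, and columns are assumptions.

```python
from snowflake.snowpark import Session

def flag_late_orders(session: Session, source_table: str, target_table: str) -> str:
    """Write a copy of the source table with an IS_LATE flag for orders whose
    goods-issue date falls after the requested delivery date."""
    df = session.table(source_table)
    flagged = df.with_column(
        "IS_LATE", df["GOODS_ISSUE_DATE"] > df["REQUESTED_DELIVERY_DATE"]
    )
    flagged.write.save_as_table(target_table, mode="overwrite")
    return f"wrote {target_table}"

# `session` is a snowflake.snowpark.Session as in the earlier sketches.
# Register the function as a permanent stored procedure (placeholder names).
session.sproc.register(
    func=flag_late_orders,
    name="TRANSFORM.FLAG_LATE_ORDERS",
    packages=["snowflake-snowpark-python"],
    is_permanent=True,
    stage_location="@TRANSFORM.CODE_STAGE",
    replace=True,
)

# Qlik's pushdown SQL can then simply execute:
#   CALL TRANSFORM.FLAG_LATE_ORDERS('TRANSFORM.ORDER_FACTS', 'TRANSFORM.ORDER_FACTS_FLAGGED');
```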

Once the data is fully mapped, transformed, and prepared, we create the final state data mart layer. Qlik Cloud Data Integration flattens the dimensional flakes into a true star schema ready for analysis, and these star schemas are consolidated under a single data mart layer per business process.

Figure 6 – SAP order-to-cash data marts by subject area.


Our data layer is now complete. We’ve used the Qlik Cloud Data Integration SaaS platform to load, store, transform, and deliver analytics-ready data stacks to power our Qlik SaaS analytics engine.

The SAP accelerators come with modules for order-to-cash, inventory management, financial analysis and procure-to-pay.

Figure 7 – QCDI process flow for the SAP accelerators.

SAP Data: From Raw to Analytics-Ready

With the data preparation done, we can now add the power of Qlik SaaS analytics. We pull all the star schemas from the data mart layer into Qlik, creating a semantic layer on top of the pure SQL data.

Qlik’s associative engine is used to tie all parts of the order-to-cash module together into a cohesive, in-memory model. We also add master measures and online analytical processing (OLAP)-like complex set analysis calculations to create dynamic data entities, such as rolling dates or complex calculations such as days sales outstanding.

Here is what the refined analysis model looks like in Qlik SaaS. Notice that there are multiple fact tables (10) that share a common set of dimensions.

Figure 8 – Qlik SaaS data model.


Having access to all that data allows us to see the bigger picture around the order-to-cash process in SAP.

Figure 9 – Order-to-cash Qlik application.


SAP Order-to-Cash Analytics

The order-to-cash module is designed to answer the business question of how an order moves from being placed, to being shipped, to being invoiced, to the customer paying. Let's look at an order that a customer has placed. That order (5907) was initially placed on 17/06/1999 and payment was completed on 12/12/1999. That's a days sales outstanding (DSO) of 194 days! With a simple SQL-based query tool, that would be the end of the story, but with the Qlik associative model we can see what actually happened.

Figure 10 – Visual of the order-to-cash process in Qlik from SAP.


There was not enough material in stock to ship the entire order, so it was split into three separate shipments and paid against three separate documents. Technically the DSO was 194 days, and even measured from the first invoice to the final payment it was 187 days. But that is not the whole story: the customer actually paid each individual invoice within 1-2 days of invoicing. These details would not have surfaced without the Qlik analytics engine.

Even in this use case, we’re only looking back at what happened. But what if we look forward and identify trends? For example, if certain parts are out of stock, we can’t ship everything at once. With Amazon SageMaker, we can predict what issues and delays we can expect.

Using the SAP accelerators, we created plug-and-play templates to ask the hard questions of SAP data, with Snowflake and AWS as the engines that power the insights with the Qlik SaaS platform.

Predicting the Future with Amazon SageMaker

One of the more powerful components of the Qlik SaaS architecture is the ability to integrate the data in the Qlik application with the Amazon SageMaker engine. In our order-to-cash use case, we took sample data and trained a SageMaker model to predict late versus on-time orders.

A quick way to achieve this is to use the Snowpark API to perform feature engineering on the dataset, before finally bringing the data in for training and deployment to a SageMaker endpoint. Then for inference we can use Qlik to access the endpoint and present predictions directly in the dashboard.
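
A minimal sketch of that flow, with assumed table and column names: the feature engineering runs inside Snowflake via Snowpark, and only the finished feature set is handed to SageMaker for training.

```python
from snowflake.snowpark import functions as F

# `session` is a snowflake.snowpark.Session as in the earlier sketches.
orders = session.table("TRANSFORM.ORDER_FACTS")

# Aggregate order lines into one feature row per sales document, with a
# late/on-time label; the aggregation is pushed down into Snowflake.
features = orders.group_by("SALES_DOCUMENT").agg(
    F.count(F.lit(1)).alias("LINE_ITEMS"),
    F.sum("NET_VALUE").alias("ORDER_VALUE"),
    F.max(F.iff(F.col("IS_LATE"), F.lit(1), F.lit(0))).alias("LABEL_LATE"),
)
features.write.save_as_table("ML.ORDER_FEATURES", mode="overwrite")

# Pull the feature set out only for training, e.g. as input to a SageMaker
# training job whose model is then deployed to a real-time endpoint.
training_data = features.to_pandas()
```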

How does this work with analytics? Within Qlik SaaS we can connect to Amazon SageMaker to pass data from the Qlik engine to an endpoint that will make the aforementioned predictions based on the SAP data.

When the Qlik app is reloaded, the analytics engine feeds data from the relevant in-memory tables as data frames into the SageMaker endpoint, where the AI/ML predictions are calculated. The predictions are passed back to the Qlik app and cached along with the original data, where they are available for the visualization layer to present.
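
Qlik's SageMaker connection handles this exchange for you, but conceptually the round trip looks like the sketch below: a batch of feature rows is sent to the endpoint and one prediction per row comes back. The endpoint name, region, and CSV payload format are assumptions for illustration.

```python
import boto3

runtime = boto3.client("sagemaker-runtime", region_name="eu-west-1")

# Two feature rows, CSV-encoded as LINE_ITEMS,ORDER_VALUE (hypothetical schema).
payload = "3,1250.00\n1,89.90\n"

response = runtime.invoke_endpoint(
    EndpointName="order-late-predictor",  # hypothetical endpoint name
    ContentType="text/csv",
    Body=payload,
)

# One prediction per input row, e.g. the probability that the order is paid late.
predictions = response["Body"].read().decode("utf-8").splitlines()
print(predictions)
```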

Figure 11 – Amazon SageMaker and Qlik SaaS integration.


We have now completed the cycle of taking historical data from SAP, processing it and transforming it into analytics-ready data using Qlik Cloud Data Integration. We have also presented this refined data using Qlik Cloud Analytics and predicted future outcomes using Amazon SageMaker – all running on the Snowflake data cloud.

Conclusion

In a typical order management cycle, information sharing between organizations has become crucial for the successful operation of modern enterprises. Improved customer satisfaction, increased competitiveness, and the reduction of supply chain bottlenecks and days sales outstanding are key indicators of an optimized cash flow for the company.

In this post, we discuss how the Qlik Cloud Data Integration SAP accelerator solution, in partnership with Snowflake and AWS, can accelerate your SAP data modernization, enable greater agility and collaboration across organizations, and quickly deliver business solutions through optimized order-to-cash business insights.

To learn more, please contact our sales department to realize the full potential of your SAP data.
