
Creating SAP insights with Qlik, Snowflake and AWS.

A technical blog about Qlik Cloud Data Integration

How to create insights from SAP (ERP) with Qlik, Snowflake and AWS.

Order-to-cash is a critical business process for any organisation, especially retail and manufacturing companies. It starts with booking a sales order (often on credit), followed by fulfilling that order, invoicing the customer and finally managing accounts receivable for customer payments.

Fulfilling sales orders and invoicing can affect customer satisfaction, and accounts receivable and payments affect working capital and cash liquidity. As a result, the order-to-cash process is the lifeblood of the business and is critical to optimise.

SAP ERP (enterprise resource planning) contains valuable sales order, distribution and financial data, but it can be challenging to access and integrate data in SAP systems with data from other sources to get a complete picture of the end-to-end process.

For example, understanding the impact of weather conditions on supply chain logistics can directly affect customer satisfaction and their willingness to pay on time. Organisational silos and data fragmentation can make it even harder to integrate with modern analytics projects. That, in turn, limits the value you get from your SAP data.

Order-to-cash is a process that requires active intelligence: a state of continuous intelligence that supports triggering immediate actions based on real-time, up-to-date data. Streamlining this analytical data pipeline typically requires complex data integrations and analytics that can take years to design and build, but it doesn't have to be this way.

  • What if there was a way to combine the power of Amazon Web Services (AWS) and its artificial intelligence (AI) and machine learning (ML) engine with the computing power of Snowflake?
  • What if you could use a single Qlik software-as-a-service (SaaS) platform to automate ingestion, transformation and analytics for some of the most common SAP-focused business transformation initiatives?
  • What if suppliers and retailers/manufacturers could collaborate better by enabling mutual access to real-time data through Snowflake's data sharing and marketplace capabilities?

In this blog, we discuss Qlik Cloud Data Integration accelerators for SAP in conjunction with Snowflake and AWS.

Qlik Cloud Data Integration

Qlik Cloud Data Integration accelerators integrate with Snowflake to automate discovery, transformation and analysis to solve some of the most common SAP problems. This enables users to derive business insights that can drive decision-making.

Qlik provides a single platform that extracts data from SAP and loads the data into Snowflake as the data repository. Qlik keeps the data synchronised with its change data capture (CDC) gateway, which feeds the transformation engine, allowing Qlik to convert the raw SAP data into business-friendly data ready for analysis.

Qlik uses its Qlik Cloud Analytics service on the SAP data to enable analysis and visualisation, and to feed data into Amazon SageMaker to generate predictions with its artificial intelligence (AI) and machine learning (ML) engine.

Snowflake: the Data Collaboration Cloud

Snowflake holds AWS competencies in data and analytics as well as machine learning, and has reinvented the data cloud to meet today's digital transformation needs. Organisations across industries use Snowflake to centralise, manage and collaborate on data and to generate actionable insights.

Here are the main reasons why organisations trust Snowflake with their data:

  • Snowflake is a cloud- and region-independent data cloud. For example, if a customer's SAP data is hosted on AWS, they can provision Snowflake on AWS and use AWS PrivateLink for secure and direct connectivity between SAP, AWS services and Snowflake.
  • Separation of compute and storage allows users to apply granular controls and isolation, as well as role-based access control (RBAC) policies, to different types of workloads. Extract, transform, load (ETL) jobs, critical business intelligence (BI) reports and feature engineering for machine learning can each run on isolated compute, and users control how much compute each workload consumes.
  • A thriving data marketplace to enrich customer data with third-party data offerings. The marketplace enables secure data sharing, internally and externally, without duplicating copies of data.
  • A strong tech-partner ecosystem offering best-of-breed products in every data category: data integration, data governance, BI, data observability, AI/ML.
  • The ability to bring code to the data, instead of exporting or moving data to separate processing systems, via Snowpark. Code written in Java, Scala or Python runs inside Snowflake (a small sketch follows this list).
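
As a small illustration of those last two points (isolated compute per workload and Snowpark's code-to-data model), the sketch below pushes an aggregation down into Snowflake from Python. The connection settings, warehouse name and table/column names are placeholders for this illustration, not part of the accelerator itself.

from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

# Placeholder connection settings; the dedicated ETL_WH warehouse illustrates
# giving this workload its own isolated compute.
connection_parameters = {
    "account": "<account>", "user": "<user>", "password": "<password>",
    "warehouse": "ETL_WH", "database": "SAP_DB", "schema": "STORE",
}
session = Session.builder.configs(connection_parameters).create()

# The query below is compiled to SQL and executed inside Snowflake;
# only the small result set travels back to the client.
open_order_value = (
    session.table("SALES_ORDERS")                      # illustrative table
    .filter(col("ORDER_STATUS") == "OPEN")
    .group_by("SALES_ORG")
    .agg(sum_(col("ORDER_VALUE")).alias("OPEN_VALUE"))
)
open_order_value.show()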

Joint Solution Overview

Let's delve into an SAP business use case that all companies share: order-to-cash. This process in SAP usually covers the sales and distribution module, but we have added accounts receivable to complete the story.

Figure 2 – SAP accelerators architecture for the order-to-cash process


SAP accelerators use pre-built logic to turn raw SAP data into business-oriented analytics. It starts with getting the data from SAP; you can deploy and install Qlik Data Gateway - Data Movement near the SAP system to extract the data from SAP and put it into Snowflake, without affecting the performance of the SAP production system.

For the SAP accelerators, we used the SAP extractors as the basis for the data layer. Because the extractors deliver pre-transformed data, we can extract from SAP in a smarter way. For the order-to-cash scenario we need 28 extractors, whereas going directly against the underlying SAP structures would involve more than 200 tables.

Using a single endpoint, we extract the data from SAP to Snowflake and partition the actual data by usage scenario, but we use a common set of dimensions to feed the scenarios. Below is what this architecture looks like conceptually.

Figure 3 – QCDI data transformation process.

This process allows easy future addition of new SAP use cases.

With our single endpoint, we can simultaneously load data into our landing and storage areas. Data is only landed once and Qlik uses as many views as possible to avoid data replication within Snowflake.

We now have two different dimensional loads, because some dimensions are not CDC (delta) enabled. Those are reloaded on a schedule and merged with the delta-enabled dimensions in a transformation layer, which presents a single set of entities for constructing the data mart layer.
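
Qlik Cloud Data Integration generates and manages this merge logic itself; purely as an impression of the pattern, and reusing the Snowpark session from the earlier sketch, the pushdown could look roughly like this (the schema, table and column names are invented):

# Conceptual sketch of merging a CDC/delta feed into a dimension that is also
# fully reloaded on a schedule, producing a single dimension entity.
# `session` is the Snowpark Session created in the earlier sketch.
merge_customer_dim = """
MERGE INTO TRANSFORM.DIM_CUSTOMER AS tgt
USING LANDING.CUSTOMER_DELTA AS src
  ON tgt.CUSTOMER_ID = src.CUSTOMER_ID
WHEN MATCHED THEN UPDATE SET
  tgt.CUSTOMER_NAME = src.CUSTOMER_NAME,
  tgt.COUNTRY       = src.COUNTRY
WHEN NOT MATCHED THEN INSERT (CUSTOMER_ID, CUSTOMER_NAME, COUNTRY)
  VALUES (src.CUSTOMER_ID, src.CUSTOMER_NAME, src.COUNTRY)
"""
session.sql(merge_customer_dim).collect()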

Let's look at the order-to-cash process. We land and store the data in Snowflake, and in the landing layer we add the rules that rename the SAP extractor tables and columns to user-friendly names.

Figure 4 – QCDI SAP rules and metadata engine.

You might notice that there are many rules. We ran a report in SAP to extract the metadata for all the extractors, but the field names are not used consistently. For example, KUNNR is the ship-to customer in one extractor and the sold-to customer in another.

Each extractor has its own definition and we used Qlik Sense to create a metadata dictionary that we can apply in the user interface (UI).
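
Conceptually, that dictionary resolves a friendly name per extractor and per field. The snippet below is a simplified impression of the idea; the extractor names and mappings are examples, not the full dictionary.

# Illustrative metadata dictionary: the same SAP field name can mean different
# things per extractor, so friendly names are resolved per extractor.
FIELD_DICTIONARY = {
    "2LIS_11_VAITM": {          # sales order items
        "KUNNR": "SoldToCustomer",
        "VBELN": "SalesDocument",
        "NETWR": "NetOrderValue",
    },
    "2LIS_12_VCITM": {          # delivery items
        "KUNNR": "ShipToCustomer",
        "VBELN": "DeliveryDocument",
    },
}

def friendly_name(extractor: str, sap_field: str) -> str:
    """Return the business-friendly column name for a field in a given extractor."""
    return FIELD_DICTIONARY.get(extractor, {}).get(sap_field, sap_field)

print(friendly_name("2LIS_12_VCITM", "KUNNR"))  # ShipToCustomer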

Figure 5 – QCDI SAP transformations.

As you can see, several important things are happening simultaneously. Within this no-code UI, we routed more than 80 extractors to a landing area, renamed them and added views with friendly names in the storage layer.

This is important because many SAP solutions require a separate flow or piece of code for each extractor or table, each of which has to be maintained individually, whereas within Qlik everything is managed at once through the SaaS UI (no coding required).

Once the data is correctly indexed and renamed, we start running our transformation layers. This process merges the dimensions into single entities and builds the business-specific logic for a use case such as order-to-cash.

The transformation layer is where we start manipulating the data with 100% pushdown SQL to Snowflake. Examples of transformations include currency conversions, flattening of descriptive texts and other SQL manipulations.

Besides the SQL manipulations, a Python Snowpark stored procedure was also created in Snowflake and invoked via the Qlik SQL pushdown. This shows how engineers familiar with Python can build transformation steps as stored procedures in Snowflake and call them via Qlik.
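
The actual procedure is not shown in this blog, but as an impression, a Snowpark Python stored procedure can be registered from Python and then called from any SQL pushdown step. Everything in the sketch below (connection settings, names, stage, cleaning logic) is illustrative.

from snowflake.snowpark import Session
from snowflake.snowpark.functions import sproc, upper, trim, col

session = Session.builder.configs({
    "account": "<account>", "user": "<user>", "password": "<password>",
    "warehouse": "ETL_WH", "database": "SAP_DB", "schema": "TRANSFORM",
}).create()

@sproc(name="CLEAN_CUSTOMER_NAMES", replace=True, is_permanent=True,
       stage_location="@SPROC_STAGE",                    # assumed stage for the procedure code
       packages=["snowflake-snowpark-python"], session=session)
def clean_customer_names(session: Session, source_table: str, target_table: str) -> str:
    # Trim and upper-case the customer name column, then persist the result.
    df = session.table(source_table).with_column(
        "CUSTOMER_NAME", upper(trim(col("CUSTOMER_NAME")))
    )
    df.write.save_as_table(target_table, mode="overwrite")
    return f"wrote {df.count()} rows to {target_table}"

# Once registered, the procedure is callable from plain SQL, which is how a
# SQL-pushdown step (for example from Qlik) can invoke it:
#   CALL CLEAN_CUSTOMER_NAMES('STORE.CUSTOMERS', 'TRANSFORM.CUSTOMERS_CLEAN');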


Once the data is fully mapped, transformed and prepared, we create the final-state data mart layer. Qlik Cloud Data Integration flattens the snowflaked dimensions into a true star schema ready for analysis, and these star schemas are consolidated under a single data mart layer per business process.
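
Qlik Cloud Data Integration performs this flattening automatically; as a purely illustrative impression of what snowflake-to-star means, a customer dimension split over customer, region and country tables could be collapsed into one flat dimension view like this (hypothetical tables, reusing the earlier Snowpark session):

# Illustrative only: collapsing a snowflaked customer dimension
# (customer -> region -> country) into a single flat star-schema dimension.
flatten_customer_dim = """
CREATE OR REPLACE VIEW DATAMART.DIM_CUSTOMER_FLAT AS
SELECT c.CUSTOMER_ID,
       c.CUSTOMER_NAME,
       r.REGION_NAME,
       co.COUNTRY_NAME
FROM TRANSFORM.DIM_CUSTOMER c
LEFT JOIN TRANSFORM.DIM_REGION  r  ON c.REGION_ID  = r.REGION_ID
LEFT JOIN TRANSFORM.DIM_COUNTRY co ON r.COUNTRY_ID = co.COUNTRY_ID
"""
session.sql(flatten_customer_dim).collect()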

Figure 6 – SAP order-to-cash data marts by subject area.

Our data layer is now complete. We used the Qlik Cloud Data Integration SaaS platform to load, store, transform and deliver data marts ready for analysis to power our Qlik SaaS analytics engine.

The SAP accelerators come with modules for order-to-cash, inventory management, financial analysis and procure-to-pay.

Figure 7 – QCDI process flow for the SAP accelerators.

SAP Data: From Raw to Analytics-Ready

With data preparation completed, we can now add the power of Qlik SaaS analytics. We pull all the star schemas from the data mart layer into Qlik and create a semantic layer on top of the pure SQL data.

Qlik's associative engine merges all the components of the order-to-cash module into a coherent in-memory model. We also add master measures and complex, online analytical processing (OLAP)-like set analysis calculations to create dynamic data entities such as rolling dates, and complex calculations such as days sales outstanding.

Here’s what the refined analytics model looks like in Qlik SaaS. Notice how there are multiple fact tables (10) sharing that common set of dimensions.

Figure 8 – Qlik SaaS data model.


Having access to all that data allows us to see the bigger picture around the process from orders to payment in SAP.

Figure 9 – Order-to-cash Qlik application.

SAP Order-to-Cash Analytics

The order-to-cash module is designed to answer the business question of how an order progresses from the moment the product is ordered, to when it is shipped, to when it is invoiced, to when the customer has paid. Let's look at an order placed by a customer. That order (5907) was initially placed on 17-06-1999 and payment was completed on 12-12-1999. That is a days sales outstanding (DSO) of 194 days! With a simple SQL-based query tool, that would be the end of the story, but with the Qlik associative model we can see what actually happened.

Figure 10 – Visual of the order-to-cash process in Qlik from SAP.

There was not enough material in stock to ship the entire order, so it was split into three separate shipments and paid through three documents. Technically the DSO was 194 days, but measured from invoice to payment it was only 187 days. Even that is not the whole story: the customer actually paid within 1-2 days of each invoice. These details would not have emerged without the Qlik analytics engine.
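
To make that distinction concrete, the small sketch below contrasts order-level DSO with the payment delay per invoice for a split order. The dates are invented stand-ins, not the actual values from the SAP dataset.

from datetime import date

# Invented dates for a split order: one order, three (invoice_date, payment_date) pairs.
order_date = date(2023, 1, 5)
documents = [
    (date(2023, 2, 20), date(2023, 2, 21)),
    (date(2023, 4, 2),  date(2023, 4, 4)),
    (date(2023, 6, 30), date(2023, 7, 1)),
]

final_payment = max(paid for _, paid in documents)
order_level_dso = (final_payment - order_date).days                   # what a simple query reports
per_invoice_delay = [(paid - inv).days for inv, paid in documents]    # 1-2 days each

print(f"Order-level DSO: {order_level_dso} days")
print(f"Days from each invoice to its payment: {per_invoice_delay}")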


Even in this usage scenario, we only look back at what happened. But what if we look ahead and identify trends? For example, if certain parts are out of stock, we cannot ship everything at once. With Amazon SageMaker, we can predict what problems and delays we can expect.

With SAP Accelerators, we have created plug-and-play templates to ask the tough questions on SAP data, with Snowflake and AWS as the engines driving the insights with the Qlik SaaS platform.


Predicting the Future with Amazon SageMaker

One of the more powerful components of the Qlik SaaS architecture is the ability to integrate the data in the Qlik application with the Amazon SageMaker engine. In our order-to-cash use case, we took sample data and trained a SageMaker model to make predictions about late versus on-time orders.

A quick way to achieve this is to use the Snowpark API to perform feature engineering on the dataset before the data is brought into SageMaker for training and the resulting model is deployed to a SageMaker endpoint. For inference, we can then use Qlik to call the endpoint and present the predictions directly in the dashboard.
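
As an impression of that flow, the sketch below derives a simple "late or not" label and a few features inside Snowflake with Snowpark, then pulls the result out for a SageMaker training job. The connection settings, table and column names are invented for this illustration.

from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, datediff, when

session = Session.builder.configs({
    "account": "<account>", "user": "<user>", "password": "<password>",
    "warehouse": "ML_WH", "database": "SAP_DB", "schema": "DATAMART",
}).create()

orders = session.table("FCT_ORDER_TO_CASH")         # illustrative fact table

features = orders.select(
    col("ORDER_ID"),
    col("ORDER_VALUE"),
    col("NUM_LINE_ITEMS"),
    datediff("day", col("ORDER_DATE"), col("REQUESTED_DELIVERY_DATE")).alias("LEAD_TIME_DAYS"),
    when(col("ACTUAL_DELIVERY_DATE") > col("REQUESTED_DELIVERY_DATE"), 1)
        .otherwise(0)
        .alias("IS_LATE"),                           # training label
)

training_df = features.to_pandas()                   # handed to the SageMaker training job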

How does this work with analytics? Within Qlik SaaS, we can connect to Amazon SageMaker to pass data from the Qlik engine to an endpoint that will make the aforementioned predictions based on the SAP data.

When the Qlik app is reloaded, the analytics engine feeds data from the relevant in-memory tables as data frames to the SageMaker endpoint, where the AI/ML predictions are calculated. The predictions are returned to the Qlik app and cached along with the original data, ready for the visualisation layer to present.
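
Qlik's SageMaker connector handles this exchange natively, but under the hood it is conceptually equivalent to calling the endpoint's invocation API. The endpoint name and payload layout below are placeholders.

import boto3

runtime = boto3.client("sagemaker-runtime", region_name="eu-west-1")

# One CSV row per order: order_value, num_line_items, lead_time_days
payload = "1520.50,12,14\n310.00,2,3\n"

response = runtime.invoke_endpoint(
    EndpointName="order-late-predictor",   # placeholder endpoint name
    ContentType="text/csv",
    Body=payload,
)
predictions = response["Body"].read().decode("utf-8")
print(predictions)                         # e.g. a late-payment probability per row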

Figure 11 – Amazon SageMaker and Qlik SaaS integration.


We have now completed the cycle of collecting historical data from SAP and processing and transforming it into analysis-ready data using Qlik Cloud Data Integration. We have also presented this refined data with Qlik Cloud Analytics and forecast future results with Amazon SageMaker, all running on the Snowflake data cloud.

Conclusion

In a typical order management cycle, information sharing between organisations has become critical to the successful operation of modern enterprises. Improved customer satisfaction, increased competitiveness and the reduction of supply chain bottlenecks and days sales outstanding are key indicators of optimised cash flow for the business.


In this post, we discussed how the Qlik Cloud Data Integration SAP accelerator solution, in partnership with Snowflake and AWS, can accelerate your SAP data modernisation, enable greater agility and collaboration across organisations, and rapidly deliver business solutions through optimised order-to-cash insights.

To find out more, contact our sales department to realise the full potential of your SAP data.