Tuesday, January 17, 2017

Universal Data Adapter for SAP HANA Data Source - Part 3

Time to finish this blog series on using the UDA with an SAP HANA data source. In the previous parts, I covered both the introduction and the configuration in ODI:
In this last part, we will go through the configuration in the FDMEE web interface and, of course, extract our data from HANA.

My Source Table: FDMEE_DATA_T
Just to show you what my source data looks like:
I will be extracting data from a column-based table. I haven't tested with Virtual Tables yet, so my test only shows how to extract data from standard tables.
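For context, a column table like mine can be created in HANA with plain SQL. The sketch below is illustrative only: the column names (YEAR, PERIOD, ACCOUNT, ENTITY, AMOUNT) are my assumptions, not the exact definition of my table:

```sql
-- Hypothetical definition of the source column table (column names are illustrative)
CREATE COLUMN TABLE FDMEE_DATA_T (
    YEAR     INTEGER,         -- fiscal year, e.g. 2016
    PERIOD   INTEGER,         -- fiscal period, e.g. 1 for January
    ACCOUNT  NVARCHAR(20),
    ENTITY   NVARCHAR(20),
    AMOUNT   DECIMAL(18,2)    -- the column classified as Amount in FDMEE
);
```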

FDMEE Artifacts
A basic configuration to get the UDA working includes:
  • Source System
  • Source Adapter
  • Source Period Mappings
  • Import Format
  • Location and Data Load Rule
  • Dummy Mappings (just to make it work)

Source System
I have registered the source system with the new context I created for HANA:

Source Adapter
Once the source adapter is created for my HANA table (FDMEE_DATA_T), we need to import the table definition into FDMEE. This actually performs a reverse-engineering operation in ODI, so we get the data store with its columns:
We are now able to see the columns in the source adapter. The next step is to classify the Amount and Year/Period columns (in case we have them). In my case, we do have columns for Year and Period, so we can filter data when running the extract:
I will keep the display names the same as the column names, but remember that these are the names you will see in your import format. Therefore, I'd change them if the columns have technical names.
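Conceptually, classifying the Year and Period columns is what lets the generated extract filter the source table at runtime. The query below is only a sketch of that idea, with assumed column names and hard-coded filter values (in reality, the values come from the data load rule and the source period mappings):

```sql
-- Conceptual extract filter for January 2016 (illustrative only)
SELECT ACCOUNT, ENTITY, AMOUNT
FROM   FDMEE_DATA_T
WHERE  YEAR = 2016
  AND  PERIOD = 1;
```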
I haven't defined any source parameters for my test, but this would be done in the same way as in any other UDA configuration.

Once the table definition is imported and the parameters are created, we have to generate the template package. This step creates the ODI interface and package:
Now we are done with the source adapter setup.

Source Period Mappings
You need to configure calendars if you are going to filter by source period and year:
If your table or view has only current-period data, you will probably be fine setting the period mapping type in the data load rule to None. In my case, I just created one source period mapping for January 2016.

Import Format
An easy one-to-one map:
After configuring the import format, we have to generate the ODI scenario that will be executed when we run the data load rule. Note that the source adapter needs to be configured first, so that the table definition is imported and the ODI interface/package are successfully generated:

Location and Data Load Rule
I have created one location with one data load rule, which uses the source period mappings previously defined:

Running the Data Load Rule
Voilà! Here is my HANA data extracted:

Giving it a try with a standard view: FDMEE_DATA_V
I have also tested the data extract from a standard HANA view. The configuration steps are the same, so I'm not going to replicate them here. Just to show you that it works: my view filters only accounts starting with 1:
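A view like this can be defined with plain SQL on top of the source table. Again, this is only a sketch with assumed column names, not my exact view definition:

```sql
-- Hypothetical view keeping only accounts starting with 1 (illustrative)
CREATE VIEW FDMEE_DATA_V AS
SELECT YEAR, PERIOD, ACCOUNT, ENTITY, AMOUNT
FROM   FDMEE_DATA_T
WHERE  ACCOUNT LIKE '1%';
```

From the FDMEE side, the view is registered in the source adapter exactly like a table, so the rest of the configuration is unchanged.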

In case you get errors...
It may happen that you get errors when generating the ODI scenario. In that case, I would suggest raising an SR with Oracle, as we identified some issues when working with case sensitivity enabled.
You may get a different error, but the one below, shown in the FDMEE server log, indicates that the generation failed due to fatal errors in the ODI interface:

And this completes my trip around the SAP HANA world. I hope you enjoyed it and that it helped you avoid some headaches.
