Friday, December 20, 2019

Building your own Monitoring Solution for Data Changes in HFM Phase Submission Loads - Part 1

Back to blog!

From time to time I get some cool requirements from customers. They make me think and do what I like doing best: analyze, design, build, and make everyone happy :-)

In this case, I got a challenging requirement for an automated integration between several source systems and HFM. Of course, using our favorite fishing tool, FDMEE. Yes, I know, everybody is now thinking about Cloud and more Cloud. However, FDMEE is still a very strong EPM integration tool: it supports Hybrid deployments and survives into 11.2.

I won't go into much detail so I can keep it simple.

The customer already had all the automation built using open batch files and different direct data extracts. However, no email notification solution was in place, so it was hard for them to get load results without accessing FDMEE.

We started to gather the requirements and we came up with the following list:
  • Bypass missing mappings and send summary by email
  • Bypass invalid HFM intersections and send the Intersection Check report by email (HTML format)
  • Identify the CANNOT WRITE intersections (the blue ones) and include them in the email body
  • Show Process Management information about the HFM Process Units where data cannot be written. Basically, show Review Level so the users are aware of what is happening
  • Build a data-tracking solution for HFM Phased Submission
The goal is to cover all requirements in future blog posts, but today I will start with the key one, the data-tracking. Personally, I think this would be a great functionality for Cloud Data Management as well. It fills a gap in the FDMEE data audit: you can see how cell data changes in HFM.

Requirement - Data Tracking for HFM Phased Submission
This is the requirement word by word:
To maintain control over the phased close process, a data tracking service is required to track data changes impacting Gross Margin and EBITDA from WD2 and WD4 respectively.  
We need the following key controls:
  • The phased submission process should allow for a soft Gross Margin and EBITDA close whilst tracking changes to the underlying data impacting Gross Margin and EBITDA from WD2 and WD4 respectively
  • The data load monitoring service must be relevant i.e. we DO NOT want to track all financial statement data changes and report them to users
  • Appropriate stakeholders must be notified of the data changes once the deadlines have been passed
  • This process must not disrupt the source trial balance data load to HFM i.e. the trial balance loaded must be complete and accurate
  • Sufficient provision should be built into the design to switch over to a hard WD2/WD4 close if desired
Is it clear? I know it is not a conventional solution, but it will help establish the desired control over the phased close process without "hard" locking components of the income statement. This is what the customer wants to achieve.

Let's give some shape to these words, as we want to keep everything simple. HFM will use phased submissions to manage the data flow, with review levels acting as trigger events for data tracking.
  • Review Level 1 (RL1): both the automated and manual FDMEE load processes are running and exporting data to HFM.
  • Review Level 2 (RL2), Soft Close: the promotion to RL2 acts as the trigger to activate the data monitoring script that will track data changes when data is exported to HFM.
  • Review Level 3 (RL3), Hard Close: at RL3, the FDMEE Global user system account no longer has access to load data to HFM.
Phased Submissions at a glance
The term phased submissions in HFM is synonymous with managing data through stages of the close cycle. In a typical organization this would involve:
  • Phase 1 – Promote inter-company data
  • Phase 2 – Promote Trial Balance data
  • Phase 3 – Promote Balance Sheet movement data/cash-flow
  • Phase 4 – Promote disclosure notes 
The process encourages a phased hard close and is designed for locking data submissions defined by accounting processes (i.e. when Phase 1 is promoted to Review Level 3, inter-company data cannot be changed by the trial balance loads in Phase 2).

Our Customer
The different phases and their details are summarized below:

WD2 – Phase 1
  • Promote Gross Margin data
  • Soft close - RL2
  • Enable data monitoring on accounts that impact Gross Margin
  • Automated notifications informing users of changes to Gross Margin data

WD4 – Phase 2
  • Promote EBITDA data
  • Soft close - RL2
  • Enable data monitoring on accounts that impact EBITDA
  • Automated notifications informing users of changes to EBITDA data


WD6 – Phase 3
  • Promote Trial Balance data
  • Hard close – RL3
  • Lock trial balance data
  • Stop all email notifications

WD6 – Phase 4
  • Promote USGAAP/Disclosure Notes
  • Hard close – RL3
  • Lock USGAAP adjustments and changes to disclosure notes
  • Journals/Offline (Excel) submissions

Using a Custom HFM Dimension to Link Accounts and Phases
A conscious decision was made to avoid tagging submission groups on the Account dimension: invariably, accounts end up incorrectly tagged (or Group Finance changes their mind!), which then requires an application metadata update to correct the submission group assignment (used to profile phased submissions).

Instead, a spare custom dimension was leveraged, in this case Custom6 (Custom5 is used to evaluate constant currency), with the intention of using FDMEE and SQL mapping scripts to load data to the respective Custom6 dimension members. This means that we can now adjust the FDMEE mappings for phased submission purposes without the need to apply HFM metadata updates.
Please note that this approach will only work dynamically where the chart of accounts adheres to a rational numbering sequence, e.g. Revenue accounts are the 1* series, Cost of Sales accounts are the 2* series, OPEX accounts are the 3* series, etc. Other approaches are also valid.
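
To illustrate, here is a minimal Jython mapping sketch (Target Value #SCRIPT) that derives the Custom6 member from the account numbering sequence. The member names are purely illustrative, and fdmRow/fdmResult are the standard FDMEE mapping-script variables; the same logic can equally be written as a #SQL CASE expression:

# Jython mapping sketch: tag the Custom6 target member based on the
# source account series (member names are illustrative)
account = fdmRow.getString("ACCOUNT")

if account.startswith("1") or account.startswith("2"):
    # Revenue (1*) and Cost of Sales (2*) impact Gross Margin
    fdmResult = "C6_GM_TRACK"
elif account.startswith("3"):
    # OPEX (3*) impacts EBITDA
    fdmResult = "C6_EBITDA_TRACK"
else:
    # remaining accounts are not tracked
    fdmResult = "C6_NO_TRACK"

With this in place, re-pointing an account to a different phase is just a mapping change, not an HFM metadata update.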

Translating the Requirements for the Integration Guy
Although the solution seems complex, design is the key. Basically, we need FDMEE to:
  • Get the HFM Process Management details (Review Level for the different phases)
  • When in Soft Close, compare the new data set against the last data set loaded. That will monitor all data load changes (see the comparison sketch after this list)
  • Auto-map missing mappings so the workflow can be completed
  • Auto-map invalid HFM intersections and identify the CANNOT WRITE ones (Soft Close)
  • Keep the audit trail in FDMEE so missing mappings and invalid intersections can be easily filtered in the Data Load Workbench
  • Send email notifications with detailed results
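
As a teaser of the data-tracking piece, here is a minimal sketch of how the new data set could be compared against the last one loaded. It assumes both data sets are available in TDATASEG keyed by LOADID, and the intersection columns shown (ACCOUNTX, ENTITYX, UD6X) are illustrative; validate everything against your FDMEE schema:

def get_data_changes(fdmAPI, currentLoadId, previousLoadId):
    # compare two data sets at target intersection level and return
    # only the intersections where the amount has changed
    sqlDiff = """SELECT
                    COALESCE(CUR.ACCOUNTX, PRV.ACCOUNTX) AS ACCOUNTX,
                    COALESCE(CUR.ENTITYX, PRV.ENTITYX) AS ENTITYX,
                    COALESCE(CUR.UD6X, PRV.UD6X) AS UD6X,
                    COALESCE(PRV.AMT, 0) AS OLD_AMOUNT,
                    COALESCE(CUR.AMT, 0) AS NEW_AMOUNT
                FROM
                    (SELECT ACCOUNTX, ENTITYX, UD6X, SUM(AMOUNTX) AS AMT
                     FROM TDATASEG
                     WHERE LOADID = ?
                     GROUP BY ACCOUNTX, ENTITYX, UD6X) CUR
                FULL OUTER JOIN
                    (SELECT ACCOUNTX, ENTITYX, UD6X, SUM(AMOUNTX) AS AMT
                     FROM TDATASEG
                     WHERE LOADID = ?
                     GROUP BY ACCOUNTX, ENTITYX, UD6X) PRV
                ON CUR.ACCOUNTX = PRV.ACCOUNTX
                   AND CUR.ENTITYX = PRV.ENTITYX
                   AND CUR.UD6X = PRV.UD6X
                WHERE COALESCE(CUR.AMT, 0) <> COALESCE(PRV.AMT, 0)"""

    # each row returned is a data change to include in the notification
    return fdmAPI.executeQuery(sqlDiff, [currentLoadId, previousLoadId])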
In the next chapter, we will go through the solution design in FDMEE and then continue with some examples, so you can see how it works and the outcome of this data monitoring solution.

Merry Xmas!

Tuesday, May 28, 2019

Code Snippet: Getting ODI Details for Source/Target System

There are different scenarios where you might need to interact with your source system. For example, say we want to perform a delta extract. For that purpose, we have a timestamp column in our source table/view. In our solution, we need to execute a query against the source system to get only the entities which have data generated after a specific timestamp.

Also, you may want to export your data to a target table. You will need to connect and execute an insert statement.

For source systems, the connection details are stored in the ODI tables of the FDMEE database; basically, what you set up in ODI Topology. If using a relational database as a target system, you may want to store the connection details in ODI as well, to avoid hard-coding and make the solution more dynamic.

This version uses the current context you set up in the Source System page. We also wanted to have the logical schema as a parameter.

Let's have a look!

Below is the snippet, including the SQL query to get the ODI details for a specific source system:


def get_odi_source_details(fdmAPI, fdmContext, sourceSystemName, logicalSchema):
    '''
     Snippet:      Get ODI Source details for Source System Name 
     Author:       Francisco Amores
     Date:         21/05/2019
           
     Parameters:   
                   - fdmAPI: FDMEE API object
                   - fdmContext: FDM Context object
                   - sourceSystemName: Source System name 
                   - logicalSchema: ODI Logical Schema
                   
     Notes:         This snippet can be pasted in any event script. The function
                    returns a map object with the different properties
     
     FDMEE Version: 11.1.2.3 and later
               
     ----------------------------------------------------------------------
     Change:
     Author:
     Date:
    '''
    
    # *******************************************
    # Import section
    # *******************************************
    from java.sql import SQLException
    
       
    # *******************************************
    # Get ODI Details for Source System
    # *******************************************
    
    # log
    logMsg = "Getting ODI details for Source System %s" % (sourceSystemName)
    fdmAPI.logInfo(logMsg)
    
    sqlOdiDetails = """SELECT
                        S.SOURCE_SYSTEM_NAME,
                        C.CONTEXT_CODE AS ODI_CONTEXT,
                        CO.CON_NAME AS DATA_SERVER_NAME,
                        L.LSCHEMA_NAME AS LOGICAL_SCHEMA,
                        P.SCHEMA_NAME AS PHYSICAL_SCHEMA,
                        TXT.FULL_TXT AS JAVA_URL,
                        CO.JAVA_DRIVER,
                        CO.USER_NAME,
                        CO.PASS AS ENCRYPTED_PWD
                    FROM 
                        AIF_SOURCE_SYSTEMS S INNER JOIN SNP_CONTEXT C
                            ON S.ODI_CONTEXT_CODE = C.CONTEXT_CODE
                        INNER JOIN SNP_LSCHEMA L
                            ON L.LSCHEMA_NAME = ?
                        INNER JOIN SNP_PSCHEMA_CONT PC
                            ON PC.I_CONTEXT = C.I_CONTEXT AND
                               PC.I_LSCHEMA = L.I_LSCHEMA
                        INNER JOIN SNP_PSCHEMA P
                            ON P.I_PSCHEMA = PC.I_PSCHEMA
                        INNER JOIN SNP_CONNECT CO
                            ON P.I_CONNECT = CO.I_CONNECT
                        INNER JOIN SNP_MTXT TXT
                            ON CO.I_TXT_JAVA_URL = TXT.I_TXT
                        LEFT OUTER JOIN SNP_CONNECT_PROP CP
                            ON CP.I_CONNECT = CO.I_CONNECT
                    WHERE S.SOURCE_SYSTEM_NAME = ?"""
    
    # params
    params = [logicalSchema, sourceSystemName]
    
    try:
        # execute SQL query
        rsOdiDetails = fdmAPI.executeQuery(sqlOdiDetails, params)
        
        # initialize map
        mapOdiDetails = {}
        
        # loop
        if rsOdiDetails.isBeforeFirst():
            while rsOdiDetails.next():
                # get ODI details
                mapOdiDetails["ODI_CONTEXT"] = rsOdiDetails.getString("ODI_CONTEXT")
                mapOdiDetails["DATA_SERVER_NAME"]= rsOdiDetails.getString("DATA_SERVER_NAME")
                mapOdiDetails["PHYSICAL_SCHEMA"]= rsOdiDetails.getString("PHYSICAL_SCHEMA")
                mapOdiDetails["JAVA_URL"] = rsOdiDetails.getString("JAVA_URL")
                mapOdiDetails["JAVA_DRIVER"] = rsOdiDetails.getString("JAVA_DRIVER")
                mapOdiDetails["USER_NAME"] = rsOdiDetails.getString("USER_NAME")
                mapOdiDetails["ENCRYPTED_PWD"] = rsOdiDetails.getString("ENCRYPTED_PWD")
               
                # log
                fdmAPI.logInfo("ODI Details: %s" % mapOdiDetails)
        else:
            # ODI Details not found                        
            errMsg = "ODI Details not found for Source System name %s (Logical Schema %s)" % sourceSystemName, logicalSchema
            fdmAPI.logInfo(errMsg)
            raise RuntimeError(errMsg)        
        # close rs
        fdmAPI.closeResultSet(rsOdiDetails)
    except SQLException, ex:
        errMsg = "Error executing the SQL Statement: %s" % ex
        raise RuntimeError(errMsg)
        
    # return
    return mapOdiDetails
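
A hypothetical usage example from an event script, tying this back to the delta extract scenario: the source system name, logical schema, table and column names are all illustrative, and since ODI stores the password encrypted, you must plug in your own decryption before connecting:

from java.lang import Class
from java.sql import DriverManager

# get the ODI connection details (illustrative registration names)
odiDetails = get_odi_source_details(fdmAPI, fdmContext,
                                    "ERP_SOURCE", "LSCHEMA_ERP")

# load the JDBC driver and connect
# (plainPwd must hold the decrypted value of ENCRYPTED_PWD)
Class.forName(odiDetails["JAVA_DRIVER"])
conn = DriverManager.getConnection(odiDetails["JAVA_URL"],
                                   odiDetails["USER_NAME"],
                                   plainPwd)

try:
    # delta extract: entities with data after a specific timestamp
    # (lastTimestamp is a java.sql.Timestamp from your own control table)
    stmt = conn.prepareStatement("""SELECT DISTINCT ENTITY
                                    FROM GL_BALANCES
                                    WHERE LAST_UPDATE_TS > ?""")
    stmt.setTimestamp(1, lastTimestamp)
    rs = stmt.executeQuery()
    while rs.next():
        fdmAPI.logInfo("Entity with new data: %s" % rs.getString("ENTITY"))
    rs.close()
    stmt.close()
finally:
    conn.close()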

Code snippets for FDMEE can be downloaded from GitHub.

Monday, April 29, 2019

Kscope19 - there we go!

Every year it gets more and more complicated to get a slot at this amazing event.
I'm more than happy to be there again.

Sharing knowledge is something I really enjoy, so I will try to do my best as always.
This is the summary for you:
We all know that the administration guides are a good starting point to learn how to use a product. But, is everything written in them? Dark arts are not taught in books. I have been writing my own magic potions for many years, and it is a pleasure to be able to show you some dark magic in FDMEE.

The cloud is already a reality that increasingly takes center stage in our data integration solutions. That's why we won't forget about it, and we will share the best tips and tricks with you. You will also see the new Data Integration SUI!

If you would like to discover the best spells for your data integration requirements, this is undoubtedly your platform 9¾. We wait for you!



Looking forward to seeing you in Seattle!

Wednesday, April 17, 2019

Data Protection with Multiple Global Application Users

Dear colleagues,
I'm back! It has been a hard few months with a lot of work, especially at home, where the large family has required 200% of my attention.

Many people asked me where I was, although those who know me know that I have not been to the Caribbean :-)

That being said...
We all know that the Cloud continues to grow with great force and that the role of data integration is fundamental in the architecture of any solution. Other people, including my colleague John Goodwin, have covered many topics of maximum interest. I strongly recommend visiting the different blogs out there, although I'm sure you already do :-)

Today, I come back to show you a solution that we have been implementing for multiple customers. How many of you have had data protection problems in HFM when loading from FDMEE? Do you know all the solutions? I'm not going to cover all of them, but I will introduce the one that is not fully documented.

As usual, I'm not stating this is the best solution for your requirement. I just want to share a new idea that I have found very useful in some of my implementations.

The requirement
Let's start with a common question from customers:
When we load HFM data with FDMEE in Replace mode, some accounts are wiped out. Controllers enter that data manually, so we need to protect it. Can we?
Then, you start thinking about different approaches.
  • Maybe, we can use Merge mode instead...
  • Or Replace by Security...
  • FDMEE has built-in functionality for Data Protection...
  • Etc.
During your analysis, all options should be evaluated. You need to understand the pros and cons, and whether they have any impact on existing integration flows.

Let's now take that requirement to another level of complexity: multiple FDMEE interfaces loading different sets of data for the same HFM sub-cube (Entity, Scenario, Year, and Value). For example:
  • ERP data / Supplemental data
  • Statutory accounts / IFRS16 accounts
The two examples above have something in common: if you execute the Data Load Rules (same category) with the Replace export mode, each DLR will delete the data of the other one. This is how the Replace load method works in HFM: data for the sub-cube is deleted before the new data set is loaded. For example:
  1. DLR GL_DATA loads actual data in Replace mode for Entity NY, Period Mar-2019
  2. DLR IFRS16_DATA loads actual data in Replace mode for Entity NY, Period Mar-2019
There can be many different sequences and scenarios but if we focus on the above one, the second execution will delete all the data previously loaded for NY/Mar/2019 (assuming there is no data protection mechanism).

Can I use HFM Data Protection functionality available in FDMEE?
I must admit that I have never been a big fan of the Data Protection functionality. The main reasons are the way it works, its limitations, and the performance impact it can have. Basically, FDMEE (legacy FDM Classic too) extracts the data to be protected and then appends it to the DAT file that will be loaded.
For the sub-cube being loaded, you can protect either all data that has a specific member name in any dimension of the data intersections, or all data that does not have that member.
Some of the limitations I refer to: you cannot protect multiple dimension members OOTB. Also, protecting data with the "<>" operator can result in FDMEE extracting big volumes of data. Definitely something to be taken into account.

Therefore, yes, data protection is an option to be evaluated, but in addition to understanding how it works, you need to take it into account in your HFM application design. If you want to protect manual data inputs or different data sets loaded through FDMEE, you may want to consider using an HFM custom dimension (typically Custom4), so the different data sets use different custom dimension members. Each FDMEE interface should then protect all data different from the custom protection member it is loading into. As you can imagine, simply extracting the HFM data requires time and resources. Also, several re-loads can happen depending on our solution design.
BTW, I haven't mentioned Cell Texts but they are included in the data protection process.

What about Merging data instead of Replacing?
If you are thinking that changing your load method could be the solution to your data protection issue, you may be introducing more issues. As we usually say, the cure is worse than the disease. E.g., if you have re-allocations in your data, you might be protecting your data but also leaving wrong data in the system. There is a well-known custom solution called Smart-Merge, but it is not in the scope of this post, so I leave that for your research.

Replace by Security Load Method and Global Application Users
Let's have a look at a load method other than Merge and Replace. The HFM documentation defines Replace by Security; in a nutshell, it writes to and clears only the data cells that you have access to. This seems to be a good way of protecting data, doesn't it? As you know, FDMEE leverages the HFM load methods, so Replace by Security is available when exporting data.
So if the user loading data has specific security classes assigned, the secured data will be protected as it will not be cleared. Good, we are on the right path.

Now let's talk about users. Let's say the user loading into HFM is an admin user. Restricting access for the admin might not be a good idea. The same applies to non-admin users: assigning security classes for data protection requirements only might conflict with their existing role.

What about having specific Shared Services native users for data protection only? That would fit, but we don't want everyone using the same user in FDMEE, as that is not compatible with other FDMEE features such as Security by Location.
Luckily, FDMEE has something called Global Application Users. This feature is not new, as it was already available in legacy FDM Classic as the "Global Logon" option in the target adapter.

In FDMEE, this option is available in the Target Application Options:
Therefore, if we join the two concepts, we have global users loading data in Replace by Security mode. Still on the right path. However, this option is only available at target application level. We cannot override it at Data Load Rule level as we can with other target options.
One target application can only have one global user assigned. This is not good if we want multiple global users for multiple data flows.
You may come up with the option of registering target applications multiple times, which is possible from FDMEE 220. In my opinion, that's not a good idea, as most of the FDMEE artifacts are defined at target application level and you would end up duplicating them as well.

This post exists because we had to find a solution to work around that limitation :-)

Multiple Global Application Users at Data Load Rule level
If we cannot have global users at DLR level OOTB, how can we get them?
Global Users, like many other options, are target application options. Therefore, as we already showed in other blog posts, they are stored at run-time in the table AIF_BAL_RULE_LOAD_PARAMS when the DLR is executed. The technical name for the global user option is GLOBAL_USER_FOR_APP_ACCESS.
Then, this little hack is easy: we can update the table at run-time based on whatever logic we define to pick the right global user. The script below inserts the global application user for the current DLR only if it does not already exist (this is done in case you want to place the script in different event scripts, as shown in the section "At which step do we update the global user?").
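A minimal Jython sketch of that logic for Oracle, assuming the standard LOADID/PARAMETER_NAME/PARAMETER_VALUE columns and the three-argument fdmAPI.executeDML (validate both against your environment):

def set_global_user(fdmAPI, fdmContext, globalUser):
    # id of the current process (DLR execution)
    loadId = fdmContext["LOADID"]

    # insert the option only if it does not exist yet for this LOADID
    sqlInsert = """INSERT INTO AIF_BAL_RULE_LOAD_PARAMS
                       (LOADID, PARAMETER_NAME, PARAMETER_VALUE)
                   SELECT ?, 'GLOBAL_USER_FOR_APP_ACCESS', ?
                   FROM DUAL
                   WHERE NOT EXISTS
                       (SELECT 1 FROM AIF_BAL_RULE_LOAD_PARAMS
                        WHERE LOADID = ?
                        AND PARAMETER_NAME = 'GLOBAL_USER_FOR_APP_ACCESS')"""

    # update it in case it already existed (e.g. a global user defined
    # at Target Application level)
    sqlUpdate = """UPDATE AIF_BAL_RULE_LOAD_PARAMS
                   SET PARAMETER_VALUE = ?
                   WHERE LOADID = ?
                   AND PARAMETER_NAME = 'GLOBAL_USER_FOR_APP_ACCESS'"""

    fdmAPI.executeDML(sqlInsert, [loadId, globalUser, loadId], False)
    fdmAPI.executeDML(sqlUpdate, [globalUser, loadId], True)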
As you can see, the SQL above uses an INSERT/UPDATE combination. Why? Because if you define a global user at Target Application level, there will already be an existing row in the table, so we just have to update it. If you don't define one, we have to insert it. It also avoids issues if you want to use the global user in the Validate step to run the Intersection Check report and then try to insert it again in the Load step (as long as you execute the DLR in one go).

The code shown is valid for Oracle Database, but it would be similar for SQL Server. You might also consider a MERGE statement, as sketched below. Whatever you implement, the result must be the same :-)
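
For reference, a hedged Oracle MERGE equivalent (same assumed column names):

# single-statement alternative to the INSERT/UPDATE pair
sqlMerge = """MERGE INTO AIF_BAL_RULE_LOAD_PARAMS P
              USING (SELECT ? AS LOADID FROM DUAL) S
              ON (P.LOADID = S.LOADID
                  AND P.PARAMETER_NAME = 'GLOBAL_USER_FOR_APP_ACCESS')
              WHEN MATCHED THEN UPDATE
                  SET P.PARAMETER_VALUE = ?
              WHEN NOT MATCHED THEN INSERT
                  (LOADID, PARAMETER_NAME, PARAMETER_VALUE)
                  VALUES (S.LOADID, 'GLOBAL_USER_FOR_APP_ACCESS', ?)"""

fdmAPI.executeDML(sqlMerge, [loadId, globalUser, globalUser], True)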

At which step do we update the global user?
The key question is: at which point in our process do we update the global user? The answer depends on your specific requirements, but the table below summarizes the main scenarios:
How do we configure the global users?
It depends on the requirements. For our example, if we need two global users:
  • User for GL data: IntegrationUser_INP_FDM
  • User for IFRS16 data: IntegrationUser_IFRS16_FDM
We have to perform the following steps:
  1. Create the security classes in HFM: C4_INP_FDM and C4_IFRS16_FDM
  2. Activate the security class for Custom4 (App settings in HFM metadata)
  3. Assign the security classes to the Custom4 members: INP_FDM and IFRS16_FDM
  4. Assign the Default role to the global user so it shows up in the provisioning matrix below
  5. Assign user access to the security classes
Let's see an end-to-end example
We will first show the data protection issue:
  1. Load GL data first in Replace mode
  2. Input data manually
  3. Load IFRS16 data in Replace mode.
To show how the solution works, we will then repeat the process, loading IFRS16 data in Replace by Security mode with the new global user IntegrationUser_IFRS16_FDM.
  1. As a starting point, we load the GL data with Replace Mode. Data is loaded to Custom4 member INP_FDM

  2. The HFM data form shows the right data being loaded:
  3. After loading GL data, we input value 500 manually (Custom4 member INP):
  4. Finally, we load IFRS16 data in Replace mode:
As you can see, all the data previously loaded has been wiped out after loading IFRS16 data in Replace mode. Therefore, neither the manual input nor the GL data has been protected.
Now we have a data protection issue and a solution for it. Once we apply our event script with the code to update the global user at run-time, the data looks good after loading in Replace by Security mode. The three values have been protected as expected:
Conclusion
As usually happens with any custom solution, you need to take different considerations into account:
  • If you run the data load rule manually, you can still select Replace mode, as there is no functionality to hide load methods. To avoid that issue, you can always have a BefLoad script which checks the export mode before loading data (see the sketch after this list).
  • As you are loading with the global user, that's the user you see in the HFM Audit.
  • The solution applies to many different requirements. For example, we had a customer with different security requirements at HFM scenario level. In that solution, we used global users at category level.
  • Consider location security. If user A has access to location A, note that the HFM security applied will be that of the global user, not of user A. Therefore, if user A tries to load data for an entity he has no access to in HFM but the global user has, data will be loaded for that entity. As a good practice, I usually build a control where locations/data load rules are defined for unique entities. That way, I prevent users from loading data for other entities, especially when they can manipulate the source files :-)
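
As mentioned in the first bullet, here is a minimal BefLoad sketch to reject plain Replace mode. EXPORTMODE is read from fdmContext; confirm the exact property values in your FDMEE version:

# BefLoad sketch: block plain Replace so data protection is not bypassed
exportMode = fdmContext["EXPORTMODE"]

if exportMode is not None and str(exportMode).upper() == "REPLACE":
    errMsg = ("Replace mode is not allowed for this integration. "
              "Please use Replace by Security.")
    fdmAPI.logError(errMsg)
    raise RuntimeError(errMsg)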
Today, I tried to show you a new data protection solution that leverages global users, HFM security classes and the Replace by Security load method.

I hope you found it useful. Now it's your turn to get creative.

That's all my folks.