Friday, December 23, 2016

FDMEE (PSU210), diving into the patch! - Part 1

Time to install the new patch!

Installing the Patch
First thing you have to do is read the Readme.html file :-)
This patch has some pre-requisites which I had to go through first. After reviewing the installation steps, it looks simpler than previous PSUs.
I'm not going to cover how to apply the patch; just make sure you get the "OPatch succeeded" message:
I don't really understand why there is an additional step to copy the batch template file for the new feature to execute scripts from the command line. I was expecting this to be done automatically when applying the patch:
In my case, my environment is a single server, so there is no need to copy any jar files. As per Oracle's readme:

If your deployment is a distributed environment where Hyperion Financial Management and Planning are on different servers than FDMEE, copy the following jar files:
Copy the files from the Hyperion Financial Management server in EPM_ORACLE_HOME\common\hfm\\lib to the same directory on the FDMEE server.
Copy the files from the Planning server in EPM_ORACLE_HOME\common\planning\\lib to the same directory on the FDMEE server.
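For a distributed environment, the jar copy can be scripted rather than done by hand. A minimal sketch in plain Python (the paths are placeholders; it assumes the source lib folders are reachable from the FDMEE server):

```python
import glob
import os
import shutil

def copy_jars(src_lib, dst_lib):
    """Copy every .jar file from src_lib into dst_lib and return their names."""
    copied = []
    for jar in sorted(glob.glob(os.path.join(src_lib, "*.jar"))):
        shutil.copy2(jar, dst_lib)
        copied.append(os.path.basename(jar))
    return copied

# Hypothetical usage; substitute your actual EPM_ORACLE_HOME lib folders:
# copy_jars(hfm_lib_on_hfm_server, hfm_lib_on_fdmee_server)
```

Only the .jar files are copied; anything else in the folder is left alone.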

You can check versions in Workspace after patching:
In my case, I had to patch HFM and Planning as well.

Action Scripts for HFM
Action scripts are used to integrate with HFM, and they are located on the FDMEE server:
The three scripts above have been modified in PSU210:
Why is this important?
Sometimes you have to adjust these action script files. That is a valid option, but you need to take into consideration that any changes applied are undone when a patch overwrites the files. Therefore, if you have changed any of the three scripts, you will have to re-apply your changes after patching.
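A quick way to spot the customizations you need to re-apply is to diff a pre-patch backup of each script against the patched version. A minimal sketch in plain Python (file labels are hypothetical):

```python
import difflib

def script_changes(backup_lines, patched_lines):
    """Return unified-diff lines between a backed-up script and the patched one."""
    return list(difflib.unified_diff(backup_lines, patched_lines,
                                     fromfile="backup", tofile="patched",
                                     lineterm=""))
```

Lines prefixed with `-` were in your backup only (your customizations to re-apply); lines prefixed with `+` are new in the patched script.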
Just minor changes to fix a bug related to loading journal descriptions.
Just a minor change to fix a bug related to the order of records exported in the DAT file.
This change has been made to correctly cast parameter options as boolean. In the previous version, Calculate and Consolidate were not executed correctly.
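The kind of fix described can be illustrated with a small hypothetical helper (the actual script code is Oracle's): option values arrive as strings, and a non-empty string like "false" is truthy in Python/Jython, so they must be cast explicitly before deciding whether to calculate or consolidate.

```python
def to_bool(option_value):
    """Cast an option value (often the string 'true'/'false') to a real boolean."""
    return str(option_value).strip().lower() in ("true", "yes", "y", "1")

# The trap the fix addresses: bool("false") is True,
# while to_bool("false") is False.
```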
Import Formats
New Import Format options:
Please bear in mind that this is currently supported for the File type only. Other sources, like the Universal Data Adapter, don't support it yet.
They have also added a search icon for Source and Target so we don't have to select both the Source and Target types. This is a good enhancement, but it has one drawback: they did not include a column to show the application type:
You can also see options for new features such as skipping rows in delimited files, using periods/years in rows, and the tilde (~) as a file delimiter:
We can now load Text data to Planning. FDMEE used to have the columns AMOUNT and AMOUNTX to load numeric data, so new columns have been added to support Text, Date, and Smart Lists.
Consequently, some SQL views have also been adjusted. Take this into consideration if you adjusted any view for customization purposes (e.g., AIF_HS_BALANCES).

SAP HANA integration is now supported in the Universal Data Adapter:
Important: there is no documentation yet, not even in the patch readme, so I will perform the actions below at my own risk.

I haven't found any SAP HANA JDBC driver (supposedly ngdbc.jar). This seems obvious, as I don't think Oracle is allowed to ship SAP drivers. If you are an SAP customer or have access to the SAP marketplace, you will be able to download it.

As part of the configuration, some ODI files need to be imported into both Master and Work repositories.

SAP HANA (Master Repository)
We first need to import the new technology for SAP HANA (I have used INSERT_UPDATE mode, but you can use INSERT as well since this is a new object):
This also imports the Logical Schema for the UDA:
Having a look at the import report, I can deduce this is a copy of an Oracle technology :-)
After the technology has been imported, we need to import the Data Server and Physical Schema for HANA:
You should get a report like this:
If everything went OK, you will see new objects created in Topology:
SAP HANA (Work Repository)
The next steps are to import the UDA Models folder and the UDA Project, which in their new versions include the SAP HANA models and projects respectively.

Important: these two imports should be performed in INSERT mode. Otherwise, the existing UDA configuration will be deleted. I will assume there were no changes for other UDA data sources in PSU210, so I don't need to update any existing model or project.

For example, this is my Model structure before applying the patch:
As you can see, I have UDA already configured for Oracle and MSSQL.
If I import the new model folder in INSERT_UPDATE mode:
Then the existing objects will be deleted:
The import must be done in INSERT mode if you don't want this to happen:
That way, only the new SAP_HANA model/sub-model is imported:
And both new and existing Models will be available:
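The difference between the two import modes can be pictured with a toy model (a deliberate simplification for illustration, not ODI's actual implementation): INSERT_UPDATE replaces the folder's contents with the incoming ones, while INSERT only adds objects that are not already there.

```python
def import_model_folder(repo, folder, incoming, mode):
    """Toy import: repo maps a folder name to a list of model names."""
    if mode == "INSERT_UPDATE":
        repo[folder] = list(incoming)          # existing children replaced
    elif mode == "INSERT":
        existing = repo.setdefault(folder, [])
        for model in incoming:
            if model not in existing:          # only new objects added
                existing.append(model)
    return repo
```

Starting from a folder containing ORACLE and MSSQL models, importing a SAP_HANA model in INSERT mode leaves all three in place, while INSERT_UPDATE leaves only SAP_HANA, which matches the behavior shown above.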
The same applies to the UDA Project; we only want to import the new project for HANA:
The new HANA project is now available in the Project list: 
Some bugs fixed
It's not all new features in a PSU; some bugs are fixed as well :-)
Duplicate Mappings
It seems that we now get errors when adding duplicate mappings:
Even if we try to import mappings from files:
We get an error pop-up window:

Update 9/1/2017: it seems that some people are still able to create duplicate mappings after applying the patch. Indeed, I could now create duplicate maps as well. The reason is that the duplicate validation does not work against mappings already saved in the system. If you add two new maps without saving the first one before adding the second, the validation fires. However, if you add a new map, save it, and then add the duplicate, both are accepted...
In other words, it seems the validation only checks new, unsaved mappings, not existing ones. Probably a bug. Let's wait for Oracle's feedback.
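The validation gap described above can be reproduced with a toy model (hypothetical code, purely to illustrate the behavior): duplicates are rejected only among unsaved grid rows, not against mappings already saved.

```python
def add_mapping(saved, pending, new_map, check_saved=False):
    """Toy validation: reject new_map only if it duplicates a pending row
    (or, with check_saved=True, a saved one as well)."""
    pool = pending + (saved if check_saved else [])
    if new_map in pool:
        raise ValueError("Duplicate mapping: %s" % (new_map,))
    pending.append(new_map)
```

Adding the same map twice before saving raises an error, but adding a map that already exists in the saved list slips through, which mirrors the bug.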

Update 10/1/2017: a bug has been filed for this issue. Hopefully a PSE will be released soon.

Update 13/1/2017: it seems the bug is related to MSSQL only. Oracle databases should work fine.

I will leave it here for today. I hope you now have a good idea of what you get with the new patch.

Merry Xmas!

Thursday, December 22, 2016

FDMEE (PSU210), at first glance

Hi folks,

Oracle has just released FDMEE, PSU210 for friends. Lots of people have been waiting impatiently for this patch, and as usual, Oracle has already given us our Xmas present :-)
I haven't tested it yet but I wanted to give you a first overview of what you will get.

If you are already familiar with Data Management in the Cloud (i.e., FDMEE in the Cloud), you won't be so surprised, as most of these features (if not all) were already released in the latest Cloud Services updates (at least in Test).

Patch number is 23041471 and you can download it from Oracle Support.
Get ready to Patch
Is your EPM system updated with the latest patches?
There are a couple of pre-requisite patches to apply before PSU210:
  • HFM (I have installed
  • Hyperion Planning (I have installed
If you are using FDMEE to integrate with DRM, you need to install the DRM patch as well.

New Features
You may consider some fixes as new features but according to the Readme, these are the new features coming with PSU210:
More options and flexibility to Import Data
  • We can now load text data, dates, and smart lists to Planning (on-prem and Cloud). As with the feature described in the next bullet point, you can load a file with multiple balance columns. So this is finally available in FDMEE! What about extracting them? I would be surprised if we could; probably in a future PSU.
  • Load numeric data for multiple dimension members of a selected dimension in a single row of data. Forget about unpivoting your data :-) You might be familiar with this feature if you used the ODI Planning KMs before.
  • Load data with periods and years in rows, so we can now import a file having data for multiple years/periods (not to be confused with the Multi-Period files we were used to, which have period/year in columns). Basically, imagine that you have a file with multiple periods/years in rows and you want to filter them when importing data. TBH, after some analysis of this new feature, I'm not 100% happy with how it performs. I was expecting to be able to process all periods through a single POV. Instead, this feature imports the data for each period into its own POV period. However, I still need to test whether we can now create source period mappings having File as a source.
  • Support for skipping rows in delimited files. This is not actually a "new feature", as it was already possible in FDM Classic.
  • Use ~ as a file delimiter. Was this in FDM Classic? Yes, it was.
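To picture the periods-in-rows idea, here is a hypothetical source file layout (the column names are made up for illustration) and the filtering concept in plain Python:

```python
import csv
import io

# Hypothetical multi-period file: Year/Period travel in each data row
data = """Account,Year,Period,Amount
Sales,2016,Dec,100
Sales,2017,Jan,200
Costs,2017,Jan,-50
"""

rows = list(csv.DictReader(io.StringIO(data)))
# FDMEE imports each row into the POV period matching its Year/Period;
# selecting a single period looks conceptually like this:
jan_2017 = [r for r in rows if (r["Year"], r["Period"]) == ("2017", "Jan")]
```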

If you want more details, my colleague John has already posted great entries about some of them:
More Hybrid Integrations
  • We can use FDMEE on-premise to integrate our on-premise or cloud systems with FCCS (Financial Consolidation and Close Cloud Service) and Fusion GL Cloud. These two new systems can be either source or target in our integrations. However, I can't confirm yet whether we can write back data to Fusion GL Cloud.
  • We can now select the delimiter for mapping export files (available as a setting). IMHO, I can think of some nicer enhancements for mappings than this one.
Universal Data Adapter (UDA)
  • SAP HANA-based DW (SAP BW powered by SAP HANA, or simply SAP BW on HANA) is now supported in the UDA. I guess this uses the SAP JDBC driver to connect to the SAP HANA database (SAP's in-memory database). So now you know that if your source is the HANA DB, you can forget about SAP Adapter headaches :-)
  • As you may know, SAP S/4HANA Finance (formerly SAP Simple Finance) is the next generation of SAP ERP (ECC and R/3), and it sits on top of the HANA DB. We don't have a Simple Finance adapter yet (as a pre-built integration), but with this new data source we can access data in the HANA DB, which is a very good feature.
  • Just in case you read about it, SAP recently announced a new product called SAP BW/4HANA. This is the next generation of SAP Business Warehouse powered by SAP HANA.
  • So, to me, it does not really make sense to say that SAP HANA-based DW is supported in the UDA. Maybe it is more correct to say SAP HANA DB instead. I say that because if it's using the JDBC driver for the HANA DB, then you can access the database layer regardless of the application layer (S/4HANA, BW/4HANA, etc.). Maybe I'm wrong, but that makes more sense to me.
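For reference, SAP's HANA JDBC driver (ngdbc.jar) uses the driver class com.sap.db.jdbc.Driver and connection URLs of the form jdbc:sap://host:port, where by SAP convention the SQL port is 3<instance>15 (e.g. 30015 for instance 00). A tiny helper to build the URL; these details come from SAP's public driver documentation, so verify them against your driver version:

```python
def hana_jdbc_url(host, instance="00"):
    """Build a SAP HANA JDBC URL following the 3<instance>15 port convention."""
    port = 30015 + int(instance) * 100  # instance 00 -> 30015, 02 -> 30215
    return "jdbc:sap://%s:%d" % (host, port)
```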
Other Features
  • Limit the number of parallel processes submitted by a batch, so you can now avoid your FDMEE server exploding :-)
  • Execute custom scripts from the command line... uhuuuu!
  • New setting to allow unlocking by location. When it is enabled, users can unlock a POV by location on the POV selection screen.
Hopefully we also get rid of the refreshing and filtering issues in the Data Load Workbench. That would be highly appreciated. I can't confirm at the moment but will keep you posted.

BTW, new admin guide is not available yet. Hopefully next week.

That's all for today. I will post what I find interesting once I start playing with the new features.

Download, install, and enjoy!

Monday, December 5, 2016

Code Snippet: looping a ResultSet returned by API function

FDMEE provides a set of API functions which return a ResultSet object (getLocationDetails, getBatchDetails, getCategoryList, etc.). Behind the scenes, these functions execute a SQL query against the FDMEE database.
In addition, the FDMEE API function executeQuery can be used to execute our own SQL queries against the FDMEE database.

The following snippet shows how to loop a ResultSet object, get the column values returned, and log details into the FDMEE process log. In my example, I've used a simple query to get the internal IDs of the Point-of-View components (Location, Category, and Period) for the current process id (load id).

How does it work?

Let's have a look!

Looping a ResultSet object returned by FDMEE API Function

 Snippet:       Loop records in ResultSet object (executeQuery)
 Author:        Francisco Amores
 Date:          12/06/2016
 Notes:         This snippet can be pasted in any event/custom script.
                Function executeQuery executes SQL queries in the
                FDMEE database
 Instructions:  Set log level (global or application settings) to 5 
 Hints:         Use this snippet to execute a SQL query with API
                function executeQuery and loop all records in the 
                ResultSet object returned by API.
                This snippet can be also used with other API functions
                returning ResultSet objects
 Example:       Get distinct keys for POV components
 FDMEE Version: and later

# Import section
import java.sql.SQLException as SQLException
import os # Optional (used to build the debug message)

# SQL Query (Example -> get distinct partitionkey, catkey, and periodkey for specific load id)
sqlQuery = """SELECT DISTINCT PARTITIONKEY, CATKEY, PERIODKEY
            FROM TDATASEG
            WHERE LOADID = ?"""

# List of parameters
# Empty list ([]) if no parameters (?) are required -> params = []
loadId = fdmContext["LOADID"]
params = [loadId]

# Debug query
fdmAPI.logDebug("SQL Query (params):%s %s (%s)" % (os.linesep, sqlQuery, params))

try:
    # Execute query (returns ResultSet)
    # You can also use any API function returning a ResultSet
    rs = fdmAPI.executeQuery(sqlQuery, params)
    # Loop records if the ResultSet has data
    if rs.isBeforeFirst():
        while rs.next():
            # Get column values
            partKey = rs.getString("PARTITIONKEY")
            catKey = rs.getString("CATKEY")
            periodKey = rs.getString("PERIODKEY")
            # Write to log (Optional)
            fdmAPI.logDebug("POV Keys: %s, %s, %s" % (partKey, catKey, periodKey))
            # Code executed for each record
            # ...
    else:
        # No records
        fdmAPI.logDebug("No records returned!")
    # Close ResultSet
    fdmAPI.closeResultSet(rs)
except (SQLException, Exception), ex:
    # Error message
    errMsg = "Exception: %s" % ex
    fdmAPI.logError(errMsg)
    # Optionally raise RuntimeError to stop the process
    # raise RuntimeError(errMsg)

Code snippets for FDMEE can be downloaded from GitHub.