Friday, December 23, 2016

FDMEE 11.1.2.4.210 (PSU210), diving into the patch! - Part 1

Time to install the new patch!

Installing the Patch
The first thing you have to do is read the Readme.html file :-)
This patch has some prerequisites which I will go through. After reviewing the installation steps, it looks simpler than previous PSUs.
I'm not going to cover how to apply the patch, just make sure you get the "OPatch succeeded" message:
I don't really understand why there is an additional step to copy the batch template file for the new feature to execute scripts from the command line. I was expecting it to be done automatically when the patch is applied:
In my case, my environment is a single server, so there is no need to copy any jar files. As per Oracle's readme:

If your deployment is a distributed environment where Hyperion Financial Management and Planning are on different servers than FDMEE, copy the following jar files:
Copy files from the Hyperion Financial Management server on EPM_ORACLE_HOME\common\hfm\11.1.2.0\lib to the same directory on the FDMEE server.
Copy files from the Planning server on the EPM_ORACLE_HOME\common\planning\11.1.2.0\lib to same directory on the FDMEE server.

You can check versions in Workspace after patching:
In my case, I had to patch HFM and Planning as well.

Action Scripts for HFM
Action scripts are used to integrate with HFM and they are located on the FDMEE server:
The three scripts above have been modified in PSU210:
Why is this important?
Sometimes you have to adjust these action script files. This is a valid option, but you need to take into consideration that any changes you applied are undone when the patch overwrites the files. Therefore, if you have changed any of the three scripts, you will have to re-apply your changes after patching.
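If you are not sure whether your scripts contain customizations, it is worth taking a backup before patching so you can diff and re-apply your changes afterwards. A minimal sketch (the scripts folder below is an assumption, adjust it to your own installation):

import os
import shutil
import time

# Hypothetical location of the HFM action scripts - adjust to your environment
scriptDir = r"E:\Oracle\Middleware\EPMSystem11R1\products\FinancialDataQuality\bin"
backupDir = os.path.join(scriptDir, "backup_pre_psu210_%s" % time.strftime("%Y%m%d"))

if not os.path.exists(backupDir):
    os.makedirs(backupDir)

for script in ("HFM_LOAD.py", "HFM_EXPORT.py", "HFM_CONSOLIDATE.py"):
    srcPath = os.path.join(scriptDir, script)
    if os.path.isfile(srcPath):
        # Keep a timestamped copy so post-patch diffs are easy
        shutil.copy2(srcPath, backupDir)
        print "Backed up %s" % script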

HFM_LOAD.py
Just minor changes to fix a bug related to loading journal descriptions.

HFM_EXPORT.py
Just a minor change to fix a bug related to the order of records exported in the DAT file.

HFM_CONSOLIDATE.py
This change has been made to correctly cast the parameter options as boolean. In the previous version, Calculate and Consolidate were not executed correctly.
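Just to illustrate the kind of issue (a simplified sketch, not Oracle's actual code): the rule options arrive as strings, and any non-empty string evaluates to True in Jython, so they have to be compared explicitly:

# Simplified sketch: consolidation options arrive as strings ("true"/"false")
strCalculate = "false"   # value as read from the rule options (hypothetical name)

# Wrong: a non-empty string like "false" still evaluates to True
# if strCalculate: ...

# Correct: cast by comparing the normalized string value
calculate = (str(strCalculate).lower() == "true")
if calculate:
    # call the Calculate action here
    pass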
Import Formats
New import types...new import format options:
Please bear in mind that this is currently supported for File type only. Other sources like Universal Data Adapter don't support it yet.
They have also added a search icon for Source and Target so we don't have to select both Source and Target types. This is a good enhancement but it has one drawback: they did not include a column to see the application type:
You can also see new options for the new features, such as skipping rows for delimited files, using periods/years in rows, or tilde (~) as the file delimiter:
TDATASEG and TDATASEG_T Tables
We can now load Text data to Planning. FDMEE used to have the columns AMOUNT and AMOUNTX to load numeric data, so new columns have been added to support Text, Date, and Smart Lists.
Consequently, some SQL views have also been adjusted. Take this into consideration if you adjusted any view for customization purposes (e.g. AIF_HS_BALANCES).
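If you want to quickly check what changed in TDATASEG, a small sketch like this one (pasted in any event/custom script, log level 5) lists the table columns so you can spot the new ones. The query assumes an Oracle repository; on SQL Server you would query INFORMATION_SCHEMA.COLUMNS instead:

# Minimal sketch: list TDATASEG columns from the Oracle data dictionary
sqlQuery = """SELECT COLUMN_NAME, DATA_TYPE
              FROM USER_TAB_COLUMNS
              WHERE TABLE_NAME = 'TDATASEG'
              ORDER BY COLUMN_ID"""
rs = fdmAPI.executeQuery(sqlQuery, [])
while rs.next():
    fdmAPI.logInfo("%s (%s)" % (rs.getString("COLUMN_NAME"), rs.getString("DATA_TYPE")))
fdmAPI.closeResultSet(rs)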

SAP HANA
SAP HANA integration is now supported in the Universal Data Adapter:
Important: there is no documentation yet, not even in the patch readme, so I will perform the actions below at my own risk.

SAP JDBC Driver
I haven't found any SAP HANA JDBC driver (supposed to be ngdbc.jar). I find this obvious as I don't think Oracle is allowed to deliver SAP drivers. If you are a SAP customer or have access to the SAP marketplace, you will be able to download it.
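Once you have ngdbc.jar, a quick way to validate connectivity from Jython could be something like this. A sketch under assumptions: the driver class name and URL format below are what I would expect for HANA (check with your SAP team), and host, port and credentials are hypothetical:

# Quick connectivity test (standalone Jython, ngdbc.jar on the classpath)
from java.lang import Class
from java.sql import DriverManager

Class.forName("com.sap.db.jdbc.Driver")
conn = DriverManager.getConnection("jdbc:sap://hanahost:30015", "FDMEE_USER", "password")
print "Connected to: %s" % conn.getMetaData().getDatabaseProductName()
conn.close()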

As part of the configuration, some ODI files need to be imported into both Master and Work repositories.

SAP HANA (Master Repository)
We first need to import the new technology for SAP HANA (I have used INSERT_UPDATE mode but you can use INSERT as well, as this is a new object):
This also imports the Logical Schema for the UDA:
Having a look at the import report, I can deduce this is a copy of an Oracle technology :-)
After the technology has been imported, we need to import the Data Server and Physical Schema for HANA:
You should get a report like this:
If everything went OK, you will see new objects created in Topology:
SAP HANA (Work Repository)
The next steps are to import the UDA Models Folder and the UDA Project, which in their new versions include SAP HANA models and projects respectively.

Important: these two imports should be performed in INSERT mode. Otherwise, the existing UDA configuration will be deleted. I will assume that there were no changes for other UDA data sources in PSU210, so I don't need to update any existing model or project.

For example, this is my Model structure before applying the patch:
As you can see, I have UDA already configured for Oracle and MSSQL.
If I import the new model folder in INSERT_UPDATE mode:
Then the existing objects will be deleted:
The import must be done in INSERT mode if you don't want this to happen:
That way, only the new SAP_HANA model/sub-model is imported:
And both the new and existing models will be available:
The same applies to the UDA Project; we only want to import the new project for HANA:
The new HANA project is now available in the Project list: 
Some bugs fixed
A PSU is not only about new features; some bugs need to be fixed as well :-)
Duplicate Mappings
It seems that we now get errors when adding duplicate mappings:
Even if we try to import mappings from files:
 We get an error pop-up window:

Update 9/1/2017: it seems that some people are still able to create duplicate mappings after applying the patch. Indeed, I could now create duplicate maps as well. The reason is that the validation does not check against mappings already saved in the system. If you add two new maps without saving the first one before adding the second, the validation works. But if you add a new map, save it, and then add the duplicate one, both are accepted...
In other words, it seems that FDMEE only validates against new mappings and not against existing ones. Probably a bug. Let's wait for Oracle's feedback.

Update 10/1/2017: a bug has been filed for this issue. Hopefully a PSE will be released soon.

Update 13/1/2017: it seems that the bug is related to MSSQL only. The Oracle database should work fine.
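If you want to check whether duplicates are already sitting in your repository, a hedged query sketch like this one can help (column names are based on my understanding of TDATAMAP; verify them against your own schema):

# Minimal sketch (event/custom script): find duplicate explicit maps in TDATAMAP
sqlQuery = """SELECT PARTITIONKEY, DIMNAME, SRCKEY, COUNT(*) AS DUPS
              FROM TDATAMAP
              GROUP BY PARTITIONKEY, DIMNAME, SRCKEY
              HAVING COUNT(*) > 1"""
rs = fdmAPI.executeQuery(sqlQuery, [])
while rs.next():
    fdmAPI.logWarn("Duplicate map -> Location: %s, Dim: %s, Source: %s (%s rows)" %
                   (rs.getString("PARTITIONKEY"), rs.getString("DIMNAME"),
                    rs.getString("SRCKEY"), rs.getString("DUPS")))
fdmAPI.closeResultSet(rs)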

I will leave it here for today. I hope you now have a good idea of what you get with the new patch.

Merry Xmas!

Thursday, December 22, 2016

FDMEE 11.1.2.4.210 (PSU210), at first glance

Hi folks,

Oracle has just released FDMEE 11.1.2.4.210, PSU210 for friends. A lot of people have been waiting impatiently for this patch and, as usual, Oracle has already given us our Xmas present :-)
I haven't tested it yet but I wanted to give you a first overview of what you will get.

If you are already familiar with Data Management for the Cloud (so FDMEE in the Cloud), you won't be so surprised, as most of these features (if not all) were already released in the latest Cloud Service updates (at least in Test).

Patch number is 23041471 and you can download it from Oracle Support.
Get ready to Patch
Is your EPM system updated with the latest patches?
There are a couple of prerequisite patches to apply before PSU210:
  • HFM 11.1.2.4.200+ (I have installed 11.1.2.4.203)
  • Hyperion Planning 11.1.2.4.003+ (I have installed 11.1.2.4.005)
If you are using FDMEE to integrate with DRM, you need to install DRM 11.1.2.4.340 patch.

New Features
You may consider some fixes as new features but according to the Readme, these are the new features coming with PSU210:
More options and flexibility to Import Data
  • We can now load text data, dates, and smart lists to Planning (on-premises and Cloud). As with the feature described in the next bullet point, you can load a file with multiple balance columns. So this is finally available in FDMEE! What about extracting them? I would be surprised if we could; probably in a future PSU.
  • Load numeric data for multiple dimension members of a selected dimension in a single row of data. Forget about unpivoting your data :-) You might be familiar with this feature if you have used the ODI Planning KMs before.
  • Load data with period and years in rows, so we can now import a file having data for multiple years/periods (do not confuse this with the Multi-Period files we are used to, which have period/year in columns). Basically, imagine that you have a file with multiple periods/years in rows and you want to filter them when importing data (see the sample sketch after this list). TBH, after some analysis of this new feature, I'm not 100% happy with how it performs. I was expecting to be able to process all periods through a single POV. This new feature imports data for each period into its own POV period. However, I still need to test whether we can now create source period mappings having File as a source.
  • Support Skip rows for Delimited Files. This is not actually a "new feature" as it was already possible in FDM Classic.
  • Use ~ as file delimiter. Was this in FDM Classic? Yes, it was.
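
Just to picture the period/year-in-rows feature, a hypothetical source file could look like this (made-up layout and member names; the column positions are whatever you define in the import format):

      Entity;Account;Year;Period;Amount
      NY;Sales;2016;Dec;1000
      NY;Sales;2017;Jan;1200
      TX;Sales;2017;Jan;800

Each row is imported into the POV of its own period, which is the behaviour I described above. For the multi-column feature, you would instead have one column per member of the selected dimension (e.g. Sales;COGS;Overheads) and FDMEE would turn them into individual data rows.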

If you want to see more details, my colleague John already posted great entries about some of them:
More Hybrid Integrations
  • We can use FDMEE on-premise to integrate our on-premise or cloud systems with FCCS (Financial Consolidation and Close Cloud Service) and Fusion GL Cloud. These two new systems can be either source or target in our integrations. However, I can't confirm yet whether we can write back data to Fusion GL Cloud.
Mappings
  • We can now select the delimiter for mapping export files (available as a setting). IMHO, I can think of some nicer enhancements for mappings than this one.
Universal Data Adapter (UDA)
  • SAP HANA-based DW (SAP BW powered by SAP HANA, or simply SAP BW on HANA) is now supported in the UDA. I guess this is using the SAP JDBC driver to connect to the SAP HANA database (the in-memory database of SAP). So now you know that if your source is the HANA DB, you can forget about SAP Adapter headaches :-)
  • As you may know, SAP S/4HANA Finance (formerly SAP Simple Finance) is the next generation of SAP ERP (ECC and R/3) and it sits on top of HANA DB. We don't have Simple Finance adapter yet (as a pre-built integration) but with this new data source we can access data in HANA DB which is a very good feature. 
  • Just in case you read about it, SAP recently announced a new product called SAP BW/4HANA. This is the next generation of SAP Business Warehouse powered by SAP HANA.
  • So to me, it does not really make sense to say that SAP HANA-based DW is supported in the UDA. Maybe it is more correct to say SAP HANA DB instead. I say that because if it's using the JDBC driver for the HANA DB, then you can access the database layer regardless of the application layer (S/4HANA, BW/4HANA, etc.). Maybe I'm wrong, but it makes more sense to me.
Other Features
  • Limit the number of parallel processes submitted by a batch. So you can now avoid your FDMEE server exploding :-)
  • Execute Custom Scripts from Command Line... uhuuuu!
  • New setting to allow unlocking by location. By enabling it, the user will be allowed to unlock a POV by location on the POV selection screen.
Hopefully we get rid of refreshing and filtering issues in Data Load Workbench. That would be highly appreciated. I can't confirm at the moment but will keep you posted.

BTW, new admin guide is not available yet. Hopefully next week.

That's all for today. I will post what I find interesting once I start playing with the new features.

Download, install, and enjoy!

Monday, December 5, 2016

Code Snippet: looping a ResultSet returned by API function

FDMEE provides a set of API functions which return a ResultSet object (getLocationDetails, getBatchDetails, getCategoryList, etc.). These functions are executing a SQL query in the FDMEE database behind the scenes.
Besides, FDMEE API function executeQuery can be used to execute our own SQL queries in the FDMEE database.

The following snippet shows how to loop a ResultSet object, get the column values returned, and log details into the FDMEE process log. In my example, I've used a simple query to get the internal ids of the Point-of-View components (Location, Category, and Period) for the current process id (load id).

How does it work?

Let's have a look!

Looping a ResultSet object returned by FDMEE API Function


'''
 Snippet:       Loop records in ResultSet object (executeQuery)
 Author:        Francisco Amores
 Date:          12/06/2016
 Blog:          http://fishingwithfdmee.blogspot.com
 
 Notes:         This snippet can be pasted in any event/custom script.
                
                Function executeQuery executes SQL queries in the
                FDMEE database
                
 Instructions:  Set log level (global or application settings) to 5 
 Hints:         Use this snippet to execute a SQL query with API
                function executeQuery and loop all records in the 
                ResultSet object returned by API.
                
                This snippet can be also used with other API functions
                returning ResultSet objects
                
 Example:       Get distinct keys for POV components
               
 FDMEE Version: 11.1.2.3 and later
 ----------------------------------------------------------------------
 Change:
 Author:
 Date:
'''

# Import section
import java.sql.SQLException as SQLException
import os # Optional

# SQL Query (Example -> get distinct partitionkey, catkey, and periodkey for specific load id)
sqlQuery = """
            SELECT                
            DISTINCT PARTITIONKEY,
            CATKEY,
            PERIODKEY
            FROM TDATASEG
            WHERE LOADID = ?"""

# List of parameters
# Empty list ([]) if no parameters (?) are required -> params = []
loadId = fdmContext["LOADID"]            
params = [loadId]             

# Debug query
fdmAPI.logDebug("SQL Query (params):%s %s (%s)" % (os.linesep, sqlQuery, params))

try:
    # Execute query (returns ResultSet)
    # You can also use any API function returning a ResultSet
    rs = fdmAPI.executeQuery(sqlQuery, params)
    
    # Loop records if resulset has data
    if rs.isBeforeFirst():
        while rs.next():
            # Get column values
            partKey = rs.getString("PARTITIONKEY")
            catKey = rs.getString("CATKEY")
            periodKey = rs.getString("PERIODKEY")
            
            # Write to log (Optional)
            fdmAPI.logDebug("POV Keys: %s, %s, %s" % (partKey, catKey, periodKey))
            
            # Code executed for each record
            # ...  
    else:
        # No records
        fdmAPI.logDebug("No records returned!")    
    
    # Close ResultSet
    fdmAPI.closeResultSet(rs)
    
except (SQLException, Exception), ex:
    # Error message
    errMsg = "Exception: %s" % ex
    fdmAPI.logFatal(errMsg)
    # Optionally raise RuntimeError to stop the process
    # raise RuntimeError(errMsg)

Code snippets for FDMEE can be downloaded from GitHub.

Monday, November 28, 2016

Open Batch, putting some light on it!

Hi,
after some discussions with different customers, I got feedback about the Open Batch documentation. It's true that it is not very clear and omits some details. However, I also got feedback from Oracle that it will be updated soon.

In the meantime, I think it is a good idea to shed some light on it, especially on the Import and Export modes available for batch files. I will try to keep things simple and will focus on the common scenarios.

Please bear in mind that this post assumes you are familiar with the Open Batch concept. In case you aren't, you can visit Oracle's documentation in this link.

Basically, an open batch is a type of batch definition which lets you automate/schedule workflow processing (so Import, Validate, Export, Load and Check). This functionality is typically used when source files are generated from external systems like SAP. If you come from the FDM Classic world, this is the replacement for Batch Loader.

Open Batch file names
Open batch uses files which have the following information in their names:
  • POV: Point-of-View where data is processed (Location, Period, etc.)
  • Import Mode: how data is imported into FDMEE's POV
  • Export Mode: how data is exported into the target system (HFM, Planning, etc.)
They need to be located below the openbatch folder (sub-folders can also be used):
That being said, let's see how it all fits in the file name. There are two naming conventions for open batch files:
  • Data Load Rule auto-creation: used when the DLR does not exist yet
  • Existing Data Load Rule: used when the DLR already exists in FDMEE
Data Load Rule auto-creation
This naming convention is used when we want to create the Data Load Rule on the fly.
Assuming delimiter is set to @ (valid delimiters are "@", "_", "~" and ";"):
      FreeText@Location@Category@Period@IE.ext
where:
  • FreeText is a free text which is also used to define the order of execution for multiple batch files.
  • Location is the POV Location
  • Category is the POV Category
  • Period is the POV Period
  • I is the Import Mode into FDMEE
  • E is the Export Mode into Target
  • ext is the file extension (typically txt or csv)
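For example, a hypothetical batch file following this convention could be named as below (NEWYORK, Actual and Jan-2017 are made-up location, category and period names; here R would stand for Replace in both the import and export modes, which are summarized later in this post):

      10@NEWYORK@Actual@Jan-2017@RR.txt

where 10 is the free text used to define the order of execution.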
To use this approach you need to enable Auto Create Data Rule option in your batch definition:
In this case, we are using a specific sub-folder within openbatch folder: BATCH_FOLDER (this is very useful when you have multiple source systems and you want to have individual batches for them)
Let's say we have the following open batch file:
As a result of running the open batch, a new DLR has been created (LocName_Category):
Finally, you can check the FDMEE process log for details:
Questions:
  1. What happens if we run the same batch file again? Nothing, FDMEE just reuses the existing DLR.
  2. What happens if we disable Auto Create DLR because the rule is already created? Same as above.

Existing Data Load Rule
This naming convention is the most common one. Open batch assumes the DLR already exists in FDMEE so there is no need to create it.
Assuming delimiter is set to @:
      FreeText@DLR@Period@IE.ext
where:
  • FreeText is a free text which is also used to define the order of execution for multiple batch files
  • DLR is the POV Data Load Rule
  • Period is the POV Period
  • I is the Import Mode into FDMEE
  • E is the Export Mode into Target
  • ext is the file extension (typically txt or csv)
Why don't we need Location and Category? They are implicitly defined in the DLR.

Import and Export Modes
I tried to summarize the most common source/target import and export modes in the following tables:
When you run an open batch, you can see import and export modes in the process log:
Process Level
We can configure the workflow steps we want the batch to execute:
As you can see, there is no Import/Validate split any more. The reason is that, as you may know already, if we import data, then it is automatically validated.

Open Batch for Multi-Period
We can also use open batches when working with multi-period files (Excel or Text)
The naming convention for batch files will be:
FreeText@DLR@Start Period@End Period@IE.ext

I will cover multi-period batches in future posts.

Free Text, be creative!
The free text field is commonly used to define the sequence when batch files are processed sequentially. However, it is also a good place to keep some useful information for our custom batch actions.

Let's say that we have batch files generated from an external system like SAP. Our batch process has been customized to send email notifications with results after each batch file is processed.
To whom? To the SAP users who triggered the file generation.
How do I know their email addresses? We can have the user name in the free text field. Then, our custom process can easily connect to the LDAP server to get the email address based on the user name (see the sketch after the example below).

For example, after processing file:
    FAmores@DLR@Period@RR.txt
FDMEE will send an email notification to FAmores@mycompany.com, which is stored in Active Directory. Nice, isn't it?
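A minimal sketch of that custom lookup is shown below. The LDAP server, bind account and search base are hypothetical and the code assumes a standard Active Directory where the email lives in the mail attribute; adapt it to your directory:

# Minimal sketch: resolve the email address for the user name parsed from
# the open batch file name (free text field). Server, credentials and search
# base below are hypothetical - adjust to your Active Directory.
from javax.naming import Context
from javax.naming.directory import InitialDirContext, SearchControls
from java.util import Hashtable

def get_email(userName):
    env = Hashtable()
    env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory")
    env.put(Context.PROVIDER_URL, "ldap://mydc.mycompany.com:389")
    env.put(Context.SECURITY_PRINCIPAL, "CN=svc_fdmee,OU=Services,DC=mycompany,DC=com")
    env.put(Context.SECURITY_CREDENTIALS, "password")
    ctx = InitialDirContext(env)
    controls = SearchControls()
    controls.setSearchScope(SearchControls.SUBTREE_SCOPE)
    results = ctx.search("DC=mycompany,DC=com",
                         "(sAMAccountName=%s)" % userName, controls)
    email = None
    if results.hasMore():
        attrs = results.next().getAttributes()
        if attrs.get("mail") is not None:
            email = attrs.get("mail").get()
    ctx.close()
    return email

# Free text parsed from FAmores@DLR@Period@RR.txt
print get_email("FAmores")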

Open Batch files for ERP or Source Adapters
Open batch files are commonly used for File-to-EPM interfaces. However, they can also be used with other sources (E-Biz, PSFT, JDE, SAP, etc.)
The only difference is that the batch file is empty, as it is actually used only as a "trigger".
For example, for Universal Data Adapter (UDA):
And the DLR is executed in the same way as having File as source:
Scheduling
FDMEE built-in scheduler has some limitations:
  • It can't be easily scheduled for high frequency (e.g. every 5 minutes)
  • When executed, it creates a new process id (and therefore DB information, batch sub-folder, etc.) even if there are no batch files
  • You can't easily delete a specific schedule if you have several
  • You can't easily see which batches have been scheduled and when
Regarding the 2nd bullet point: in the current version, nothing prevents you from executing an open batch when there are no batch files. As a result, you end up with several empty batch entries in the Process Details page:

Luckily, there are different workarounds for limitations above. I will cover some of them in future posts.
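
In the meantime, here is a minimal sketch of one of them for the empty process-id issue: an external scheduler only triggers the open batch if there is actually something to process. Folder path and the way the batch is invoked are hypothetical, so adapt them to your environment:

# Minimal sketch for an external scheduler (e.g. Windows Task Scheduler):
# only launch the open batch if there are files waiting in the openbatch folder
import os

openBatchDir = r"\\fdmeeserver\epm_data\fdmee\inbox\batches\openbatch\BATCH_FOLDER"

# Ignore sub-folders created by previous executions, only count real files
batchFiles = [f for f in os.listdir(openBatchDir)
              if os.path.isfile(os.path.join(openBatchDir, f))]

if batchFiles:
    print "Found %s batch file(s), launching open batch..." % len(batchFiles)
    # Invoke the open batch here (command-line utility, web service, etc.)
else:
    print "No batch files found, skipping execution (no empty process ids!)"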

Some Tips & Tricks
Based on experience, I think this information can be useful:
  • You can group multiple open batches in a master batch (type Batch) although all open batches must be configured for the same target application
  • I would not use parallel open batches in the current version (11.1.2.4.200). There are some known issues
  • When customizing your batch process, bear in mind that batch files for the same DLR will overwrite data and some audit information in the FDMEE database. For example, when you have one DLR where multiple files for different entities are processed
  • Avoid using "_" as the file delimiter. The reason is that underscores are commonly used in DLR names
  • Try to force source systems generating batch files to use the batch file naming convention. Otherwise, you can execute a custom script to rename the files before the batch is processed
  • Add email notification system to your batch process. This is not an out of the box functionality in FDMEE but Admins and end-users will appreciate it. Who doesn't want something like this?

That's all folks.

I hope you enjoyed reading this post and now have a clearer idea about open batches. And remember, the best way of learning is to use them :-)