Thursday, December 31, 2015

FDM to FDMEE Migration Utility (11.1.2.4) - Part 5

Hey folks!
Last day of the year, last post of the year.
This is the 5th part of my Migration Utility series. Is it the last one? I don't think so, but it depends on how much I use the utility :-)
Today I will reveal why no artifacts were actually migrated. Before we get to the matter, I would like to summarize what we have discussed so far:
  • Part 1: Introduction to the utility and general considerations
  • Part 2: Installation and ODI configuration
  • Part 3: Migrating an HFM application. A real case study
  • Part 4: Going through initial ODI steps. Creation of the target HFM application in FDMEE
Update [4/1/2016]
Oracle has released a new version of the migration utility. All details are in Patch 22659123

One of the first stoppers we found was that even though the target HFM application was created and configured in FDMEE, no artifacts such as import formats or locations were created. Why? It took me a while to find the root cause and then confirm with Oracle that we were facing a bug. This is supposed to be fixed in the next patch for the migration utility. In the meantime you can apply the workaround we will use here.

The last step we detailed in part 4 was the configuration of the target application dimensions and options. Before we explore the issue with artifacts, I would like to briefly go through 3 configuration steps:

Create Source System Adapter
This step creates a source system for the Open Interface Adapter. This will be used for historical data migration, should we decide to proceed with it.
We can see the new source system in FDMEE:
Extract Categories and Periods
Categories are created based on what is stored in FDM tables TPOVCATEGORY and TPOVCATEGORYADAPTOR.
It's important to remark that application category mappings will only be migrated if the target category in legacy FDM is set to a target value different than [EMPTY], so actually only if the mapping is correctly defined:
Periods work in a similar way, although global and application periods are migrated in two steps rather than one. They are extracted from tables TPOVPERIOD and TPOVPERIODADAPTOR.
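If you want a quick pre-migration check, something along these lines should list the mappings that would be skipped. This is just a sketch: I'm assuming column names like CatTarget and PeriodTarget and the join keys, so verify them against your FDM schema before running it:

-- categories whose adapter-level target is still [EMPTY] will not be migrated
select c.CatName, ca.CatTarget
from tPOVCategory c
inner join tPOVCategoryAdaptor ca on ca.CatKey = c.CatKey
where ca.CatTarget = '[EMPTY]'

-- periods whose adapter-level target is still [Empty] will not get application period mappings
select p.PeriodKey, pa.PeriodTarget
from tPOVPeriod p
inner join tPOVPeriodAdaptor pa on pa.PeriodKey = p.PeriodKey
where pa.PeriodTarget = '[Empty]'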
So after our categories and periods have been migrated, let's have a look in FDMEE:
Categories look fine: 3 out of 3!
In any case, a few things you should know:

  • If the category exists, it will not be created. In my case, Actual already existed so it was not created; Budget and Forecast are new categories.
  • Category keys (CATKEY) will probably not match. New categories are created using the CATKEY sequence of FDMEE.

Regarding periods, there were lots of global periods configured in FDM with target month and year set to [Empty]. These periods are migrated as global periods, although only periods with valid mappings at adapter level are migrated to FDMEE as period mappings at target application level.

In the image above we can see all the global periods added (only new ones are added). You may want to set up your periods correctly before migrating. Otherwise you will get only a few of them as target application period mappings:
And now, artifacts not migrated
If we have a look at the ODI scenario steps, the next artifacts to be migrated are the logic groups.
We do have two logic groups in legacy FDM, so we should also have them in FDMEE:
But no, we don't have them in FDMEE :-)
I already said in the first post of this series that this is a technical utility, so any research will have to be done at a technical level, either in ODI or in the database.
Let's start then with the SQL query used by ODI to extract the logic groups: 
Before running the query, it's good to have a look and see which filters are used. There is one which caught my attention: WHERE parttype = 2
So let's see in table TPOVPARTITION which locations have partition type 2, well actually, what is that partition type property?
With a simple SQL query I can see that there are 4 locations having logic groups assigned and they have parttype = 1...
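Something along these lines did the job (again a sketch; PartName and PartLogicGrp are assumed column names, so check them in your tPOVPartition table):

-- which locations have a logic group assigned, and what is their partition type?
select PartName, PartLogicGrp, PartType
from tPOVPartition
where PartLogicGrp is not null and PartLogicGrp <> ''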
Ok, root cause found: logic groups are not extracted because the filter executed by ODI excludes locations with partition type 1. But again, what is partition type? After some research in legacy FDM we got the answer.
Do you remember the Bulk Insert and SQL Insert options when defining locations in FDM? There you have your answer. Column parttype stores the location load type: load type "SQL Insert" is stored as 1 and load type "Bulk Insert" is stored as 2. As FDMEE mostly uses bulk inserts for better performance, SQL Insert locations were simply missed in the migration utility...

Trying it again
So now that we think we know the issue, let's try it again!
First we need to update table TPOVPARTITION in the source database (or we change types one by one in the FDM application):

-- set every location's load type to Bulk Insert (parttype = 2) so the ODI filter picks them all up
update [CMFDMDEV].[dbo].[tPOVPartition]
set parttype = 2
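If you prefer not to flip every row, a more surgical variant (once more, PartLogicGrp is an assumed column name) only updates the locations that actually have a logic group assigned:

-- only update locations with a logic group assigned
update [CMFDMDEV].[dbo].[tPOVPartition]
set parttype = 2
where PartLogicGrp is not null and PartLogicGrp <> ''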

Then we need to delete the target application in FDMEE in order to avoid issues:
This will launch a purge process which deletes the target application in FDMEE along with all related objects already migrated (periods, categories, etc.)
Time to execute the ODI scenario again:
It's the 31st of December and we should all be enjoying our families and partying, so let's get to it:
Wow, groups created!
Any other artifacts? Import Formats failed!!! The ODI error message tells me something about size...
The ODI Operator shows me that import formats were extracted to the staging table (C$...) but the load into FDMEE table TBHVIMPGROUP failed.
Indeed, we can see them in the staging table:
I suspect that column IMPGROUPKEY is the culprit. If I check its size in the staging table, I can see it's different from the size in the target table, but the same as in source table tBhvImpGroup in FDM:
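A quick way to confirm the mismatch is to compare the column definitions (SQL Server syntax; the C$ table name below is a placeholder for the one ODI generates in your work schema):

-- compare IMPGROUPKEY sizes between the ODI staging table and the FDMEE target
select table_name, column_name, character_maximum_length
from information_schema.columns
where column_name = 'IMPGROUPKEY'
and table_name in ('C$_0TBHVIMPGROUP', 'TBHVIMPGROUP')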
So what is the issue then? It's obvious that the prefix is making the staging values longer than the target column allows, therefore raising the error; especially for import format AUSTRALIA_PHASED, whose key length is 21 characters when the prefix is included:
Let's test then with prefix "C_":
And now?
Here it is! All import formats have been migrated. However, you will notice that the FDM import scripts are not valid anymore. They were implemented in VBScript (.uss extension). We now have Jython, so it's time to re-write scripts :-) And here's a piece of advice... do you really need them? Analyze your scripts and see if you can re-design your FDMEE implementation in order to avoid them. They might be good candidates to be replaced by new out-of-the-box functionality.

Now that my objects have been migrated, it's time to review them, perform some manual actions where needed and, why not, migrate historical data :-)

2016 is here and new challenges are coming.
I will be presenting and enjoying KSCOPE16 in Chicago and I hope to see you all there!
Have a happy new year!

Sunday, December 20, 2015

FDMEE 11.1.2.4.100 released (PSU100) - Part 1

We finally got our Xmas present!!!

Planning, HFM, Essbase...all were patched, now it's time for FDMEE.

Folks, FDMEE 11.1.2.4.100 has been released. As usual, this new PSU fixes several bugs and provides some interesting functionality. I'm trying to enjoy my Xmas holiday but I had to find some time to open my present before Santa brought it :-)

After playing with new PSU, I have to say this is a must so I encourage you to install it.

Patch overview: 20648390
At first glance this new PSU brings 6 new features (although I'm sure we will discover a few more) and fixes about 40 bugs.
New functionality

  • Universal Adapter: provides direct integration with SQL tables or views with no need to populate the Open Interface Adapter table. We can now see the supported data sources in the source system and source adapter details. As with the Open Interface Adapter, this new feature has been implemented using ODI (note that the ODI model folder and projects are not automatically imported into the FDMEE repository, so you have to import them with ODI Studio)
  • Testing Check Rule Expression: this feature was already available in legacy FDM.
  • Loading Exchange Rates to HFM: FDMEE can be used to load exchange rates into HFM. However, when integrating with EBS or PSFT, rates were stored in the staging table for EX rates but not exported to HFM. Now you can have the end-to-end integration for EX rates.
  • Extracting HFM Journals: new parameters and filters for journal type.
  • Purge Scripts: This is a really good and expected one. Purging run-time artifacts (process and staging tables, inbox/outbox...). Nice!
As you can see, the purge scripts are available as Custom Scripts (stored in EPM_ORACLE_HOME\products\FinancialDataQuality\bin\system)
  • Map Monitor Reports: we now have two new map monitor reports, by Location and by User. In order to avoid storing a big amount of audit information, this feature is optional and can be enabled in System/Application Settings.
As always, I will review and analyze each new feature and share my feedback with you. Even before testing, a couple of them sound quite interesting to me. Others just complete parity with legacy FDM.

Bugs fixed

  • Unable to sort by location when selecting files from the Inbox folder... quite important when managing a big volume of locations :-)
  • Support case-insensitive matching in the intersection check report for HFM... I still miss the check report in HTML format as in FDM, so I'm happy when Oracle fixes bugs but I'm still waiting!
  • Status of process details updated successfully
  • Improve performance of the HFM intersection check process... I hope this really implies big performance gains, as the new architecture using the HFM API performed really badly. If this has been fixed, then it's time to enable this option back for my customers :-)
  • Accumulate within file... I have seen duplicate intersections in the export DAT file at several customers due to logic accounts having column JOURNALID as blank rather than NULL.
  • SQL Server deadlocks...this has been a nightmare for customers using MSSQL.
  • Improve HFM load process when "Data Protection" is enabled... What does improve mean? I'm quite excited.
  • Add option to turn Account Descriptions on and off... Not sure when this applies but I will find out soon
  • Overriding load method in DLR target options... When configuring the load method at DLR level, the new method was not overriding the general method set at target application level.
  • Non-admin users should be allowed to see only their processes... Good to see that we can only see (a good pun) what we are processing
  • ...
Honestly, some of the bugs listed can also be seen as new functionality. I'm happy to see the product getting better and better. I'm seeing users getting more satisfied, and hopefully this is just the beginning.


Installation
Installing the patch shouldn't be an issue as it follows the common OPatch procedure.
In this patch we are back to pre-requisites:
Oracle JDeveloper (ADF) Patch 21240419. Luckily you don't have to apply this one if you already installed HFM 11.1.2.4.100 or higher.
Installing the ADF pre-requisite
I don't have JDeveloper in my environment so I could easily follow the instructions in the readme file:
Installing FDMEE 11.1.2.4.100
Easier than previous PSUs in 11.1.2.3:
1) Stop the FDMEE Service
2) Run Opatch from EPM_ORACLE_HOME\Opatch
3) Start FDMEE Service
4) Restart the HTTP Server
Remember that some changes will happen during FDMEE server startup after applying the patch. Don't panic about errors due to objects already existing.

New FDMEE 11.1.2.4.100 is now installed. It's time to enjoy and play with it.
Main changes in the Database
One of the first things I like to review is how the database is changed. As I always say, understanding the database provides you a better insight about the product itself. Main changes applied are stored in a SQL script placed at
EPM_ORACLE_HOME\OPatch\20648390\files\products\FinancialDataQuality\database\Common\Oracle (or MSSQLServer folder)
Let's have a quick look and see what we can get:
  • New columns for table AIF_ARTIFACT_AUDIT... I bet this is related to the new Map Monitor Reports...
  • New DRM columns for tables AIF_HS_DIM_MEMBER, AIF_HS_DRM_LOADS, and AIF_HS_DRM_LOAD_HIERARCHIES. We already had DB tables for DRM integration but the functionality was not available. Is it new functionality in this PSU? Oracle did not list it but it's definitely there
  • Size for SRCKEY column (source value) is increased to 80 chars in all mapping tables (TDATAMAP, TDATAMAP_T, TDATAMAP_STG)
  • Column CURKEY now allows up to 25 chars. This issue was already addressed in an old post.
  • In the same way, DATAVIEW now allows 20 chars and JOURNALID 80 chars in the TDATASEG and TDATASEG_T tables (see the quick check below).
  • Indexes modified in both Oracle (map table) and SQL Server (to solve issues with the CHECK process, I guess)

  • New Views to audit artifacts and DRM integration
  • New Query definitions for Map Monitor Reports. You can find them in aif_reports.sql file (EPM_ORACLE_HOME\OPatch\20648390\files\products\FinancialDataQuality\database\Common)

As you can see we will be able to get audit details filtered by actions performed...interesting.
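By the way, if you want to confirm that the column resizing took effect in your repository, a quick check could look like this (SQL Server syntax; query ALL_TAB_COLUMNS instead if your repository is on Oracle). You should see 80 chars for SRCKEY and JOURNALID, 25 for CURKEY and 20 for DATAVIEW:

-- verify the widened columns after applying PSU 100
select table_name, column_name, character_maximum_length
from information_schema.columns
where (column_name = 'SRCKEY' and table_name in ('TDATAMAP', 'TDATAMAP_T', 'TDATAMAP_STG'))
or (column_name in ('CURKEY', 'DATAVIEW', 'JOURNALID') and table_name in ('TDATASEG', 'TDATASEG_T'))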
That's all for today... Have you already installed it? Tested it?

Merry Xmas to everyone and enjoy!

Sunday, November 29, 2015

FDM to FDMEE Migration Utility (11.1.2.4) - Part 4

Hi there,

In the last post we saw how the migration utility was showing "false" green ticks: my target application was created but no artifacts were migrated.

Today I will go through the ODI process so we can understand what is going on.

ODI variables
As in most FDMEE scenarios, one of the first steps is to refresh the variables that are going to be used in the process. In the migration utility we also have some variables whose values may be key for troubleshooting and understanding how all this works.

Variable: p_src_con_opr (Step 4)
This is the concatenation operator based on the JDBC driver we set up in the ODI topology for the source database hosting our legacy FDM application:
As my source database is SQL Server, my concatenation operator is the plus sign '+'.
The same applies to variable p_tgt_con_opr, which stores the concatenation operator for the target database hosting FDMEE.
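A trivial example of the difference these variables account for:

-- SQL Server concatenates with '+'
select 'FDM' + 'EE'
-- Oracle concatenates with '||' (and needs FROM DUAL)
-- select 'FDM' || 'EE' from dual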

Variable: p_prefix (Step 6)
This variable stores the value passed when executing the ODI scenario. In step 6 the value of this variable is trimmed to remove leading white spaces. It's weird that no RTRIM is applied, as trailing spaces would also cause potential issues:

Variable: p_fdmee_application_name (Step 7)
This variable stores the target application name in FDMEE. For Essbase, it will create an application named APP-DB. For Planning and HFM, the application name will be the one passed during scenario execution:

Variable: p_file_source_system_id (Step 8)
This variable stores the source system ID of our "File" source system in FDMEE.
As you can see, it takes the minimum ID. Why? You could have created different source systems with type "File" (although that is not a common practice). The new import formats migrated for source files will then be configured with the original "File" source system that FDMEE has.
In case this is not what you want, you would have to change the ID of the source system you want to use in order to make it the minimum (probably by assigning value zero to it)
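The variable refresh is roughly equivalent to this query (a sketch: AIF_SOURCE_SYSTEMS and its SOURCE_SYSTEM_TYPE column are my recollection of the FDMEE schema, and the 'FILE' literal may differ, so verify before relying on it):

-- get the id of the oldest source system with type File
select min(source_system_id)
from aif_source_systems
where source_system_type = 'FILE'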

Variable: p_global_adapter_key (Step 9)
As you may know, we could have multiple adapters in legacy FDM. One of them was set as the Global Adapter in the FDM application settings. This variable retrieves the FDM global adapter. This value will be used to migrate global periods and categories into FDMEE:

Variable: p_app_adapters_subquery (Step 10)
The migration utility uses a SQL sub-query in many different extract queries. This sub-query is stored in a variable:
In my FDM application, I have two adapters: an old version of the HFM adapter which was upgraded to a newer one, FM11X-G6-C:

Variable: p_tdst_attr_column_type (Step 11)
In case you use the HFM adapter for the ICT module (intercompany transactions), this variable will store the attribute column types for the FDMEE TDATASEG table. As you may know, loading data into the ICT module is now done by using attribute columns defined in the target HFM application:
That's not my case, so the variable stores the "DEFAULT" column type:
Quick note: I think there is a bug in this code. As you can see, it checks HFM adapter fdmFM11XG6B twice. Maybe they meant to check whether the target adapter was fdmFM11XG6A, fdmFM11XG6B or fdmFM11XG6C.

Printing variables (Step 14)
Key variables are logged to the migration utility log:
If something went wrong with the migration utility, I suggest you start by checking these variables:

Initialize migration utility classes
If you are curious and really want to see what's done in every step, then you can check Step 13. There you will see that the core process code has been built using Jython :-)

Create target application in FDMEE
Now that we are familiar with ODI...let's see how the target application CONSMINDEV has been created in FDMEE.
Steps 15 to 23 show what ODI does to create the target application:
It's remarkable that the target application does not have to exist in the system in order to be created in FDMEE: the process simply assumes it already exists and does not verify it. In my case, it actually did not exist when the migration utility was launched. Likewise, the migration utility does not check if you have access to the application before registering it. Bear in mind that the migration is done through the database layer, so no security is applied at the application layer.

Create Target Application (Step 15)
This process basically performs some checks in both the FDM and FDMEE databases in order to validate the application.
The code to create the application is in function createTargetApp. If you have any issues creating the application, you can always get the code from step 13 (Init Process).
If you review the code, you will notice that the application is created as "CLASSIC". There are a couple of notes in the migration utility guide regarding this topic:

Create Target Application Options (Step 21)
This process creates the target application options with default values. This means the migration utility does not migrate them from the FDM Classic adapter options.
You can see in the SQL code that inserts the options how the default values are taken from the AIF_LOOKUP_VALUES table:
For HFM applications, lookup type 19 refers to all valid options. In FDMEE, target application options are very similar to target adapter options in legacy FDM. There are just a few differences, like the Sticky Server option, which is now taken from the EPM Registry if applicable:
In FDM, options were stored in table tCtrlOption. As you can see below in my source FDM application, the target adapter for the HFM application had option "Intersection Validation Report" set to Yes (OptionSwitch = 1)
The default value for this option in FDMEE is No (ATTR2 = 0):
Consequently, it is highly recommended to review the target application options after they are migrated:
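For reference, you can list the default values the utility inserts straight from the lookup table (lookup type 19 as seen in the ODI step; LOOKUP_CODE is an assumed column name, so adjust it to your schema):

-- default HFM application option values used by the migration utility
select lookup_code, attr2 as default_value
from aif_lookup_values
where lookup_type = 19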
Create Target Application Dimensions (Steps 22 and 23)
Once the target application options have been created, the application dimensions are created in the target application. This procedure is performed in three steps:

1. Extract dimensions from FDM target adapter
Dimension details are stored in FDM table TINTDIMENSIONDEF:
One of my concerns was about the default dimension names used in the FDM adapter. It's very common to find the prefix FM in the Account, Entity, Scenario and Year dimensions when migrating FDM apps. It would be great if the utility removed the prefix when it is used. I say that because I have already got feedback about migrations having issues because of this. If the dimension names in the target HFM application do not match the dimension names in FDMEE, you will get connection issues between FDMEE and HFM :-(
So you either change the names in FDM before the migration or (too late?) change them in the FDMEE database after the migration.
I haven't tried it, but you could also refresh metadata from the target application in FDMEE. This should update the dimension names, and also the custom dimension names if you are using dimension aliases in HFM.
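If you go for the database route, the fix could look like the sketch below. Illustrative only: AIF_TARGET_APPL_DIMENSIONS is my recollection of the FDMEE table storing dimension details, so verify the names, filter by your application ID, and take a backup first:

-- strip the legacy 'FM' prefix from the migrated dimension names (SQL Server syntax)
update aif_target_appl_dimensions
set dimension_name = substring(dimension_name, 3, len(dimension_name) - 2)
where dimension_name in ('FMAccount', 'FMEntity', 'FMScenario', 'FMYear')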
2. Update target application dimensions for special configurations (HFM application with ICT dimensions, HP multi-currency enabled, etc.) 
Different checks and actions are done in this step.
The code to update the application dimensions is in function updateAppDimensions. If you have any issues with dimensions, you can always get the code from step 13 (Init Process).

3. Update dimension IDs for HFM target applications

Ok folks, we are close!
In the next post I will show you why artifacts were not migrated :-)

Have you already used the utility? Any feedback?

Take care!