Wednesday, December 24, 2014

Merry Xmas and Happy New Year!

Folks,

I would like to wish you a Merry Xmas and a Happy New Year!

I hope this new year brings all the best...including all features you expect from FDMEE :-)

I wish I had more time for posting, but helping others at this time of year is also great!


Francisco

Tuesday, November 25, 2014

Importing files with different File Charset Encoding - Part 1

There are multiple file character sets out there...
I'm sure you all are familiar with concepts like UTF-8.
Have you ever seen weird characters when importing data into FDMEE, or found that your file did not get imported even though there were apparently no errors?

Before starting, I would like to share some points that helped me better understand all of this. I hope they also help you.

Types of strings in Python/Jython
There are 2 types of strings: 
  • byte strings
    • string elements are called bytes
    • there are only 256 possible bytes
  • unicode strings
    • string elements are called characters
    • there are over 1,000,000 characters defined in Unicode
    • very useful because we can store almost any character and strings can be easily manipulated
How can we convert a unicode string to a byte string?
The word we are looking for is "encode":
"An encoding is a representation of a unicode string"

Note that not all encodings support every unicode character; some support only a subset of Unicode. For this reason UTF-8 (Universal Character Set Transformation Format - 8-bit) is a good one. It supports everything or, in other words, it defines a sequence of bytes for every unicode character. In fact, it's the default encoding for FDMEE.
So after understanding the above, we could say that an encoding is essentially a mapping table which translates every unicode character into one byte or a sequence of bytes. And the mapping table for UTF-8 is the most complete one :-)

If we read a byte string (typically from external sources), we need to decode it in order to manipulate it as a unicode string.

Aligning concepts
Now that we have a clearer idea about these concepts, we can clarify something. We typically say that text is either ASCII or UTF-8 or UTF-16 (I used to say it), and therefore that bytes are text. However, text is only text. When we store text, we actually should talk about encoding that text into a sequence of bytes. If we talk about images, there are many different ways to encode images into bytes; I'm sure you are familiar with JPG, BMP, etc. In the same way, there are many different ways to encode text into bytes...UTF-8, UTF-16, ASCII, etc.
Once we encode, bytes are just bytes. If we want the original text, we will have to decode.

To summarize
  • Encode: Unicode > bytes 
  • Decode: bytes > Unicode
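A quick sketch in Jython showing both directions (the sample string is just illustrative):

# a unicode string containing a non-ASCII character
uText = u"Caf\u00e9"
# encode: unicode > bytes
bText = uText.encode("utf-8")
print repr(bText)                     # 'Caf\xc3\xa9'
# decode: bytes > unicode
print bText.decode("utf-8") == uText  # True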
Please take into consideration that we are discussing this topic in the Jython 2.5.1 context. I say that because you may find differences if you have a look at Python 3.x, where bytes and strings are separate types (I would recommend this article if you are interested in seeing the differences).
I don't want to confuse you (or even myself), so we will skip this as FDMEE 11.1.2.3 uses Jython 2.5.1.

File Character Set Option in FDMEE
FDMEE knows source files can use different encoding charsets depending on many different factors like the source system generating the file, regions, people, etc. For this reason it provides the File Character Set (FCS) option at three different levels:
  • System Level (Profile File)
  • Application Level
  • User Level
Lower levels override higher levels. For example, we could have one generic FCS, another one for some target applications, and different ones for different users loading data to the same applications:
I would say that this option includes values for all encodings:
Today I'm going to discuss how we can manage a different scenario in FDMEE: the same user loading different files with different FCS into the same application.

Our Scenario
Our FDMEE application is designed so that we have one location used to import files generated from the Legacy System PIOLIN. These files use UTF-8 encoding.
In addition, the user responsible for loading PIOLIN data is also responsible for loading HFM eliminations into their HP application. The HFM data is extracted using Extended Analytics (EA). The generated file uses UTF16-LE encoding and is compressed with GZIP.
Therefore we have:
  • PIOLIN files using UTF-8
  • HFM files using UTF16-LE
  • HFM files compressed with GZIP (filename.csv.gz)
Our First Solution: use different users with different encoding
Let's say that we configure FCS as UTF-8 at application level. 
It would be great if we could set up different encodings for different source files, but unfortunately we cannot with the current release (11.1.2.3.520).
This is because the FCS option cannot currently be defined at Import Format level. If we could do that, then our solution would be straightforward.
Let's think then. It seems that the only solution would be to create additional FDMEE users, because if we set the FCS value at user level we override both the System and Application options. So for each user I could define a different FCS value and the import would work as expected.
I'm not going to ask if you like this solution because I don't :-). Just imagine an FDMEE user having to log off and log on to import different files. Not good.

A Better Solution: convert source file encoding to FDMEE's one
If we have our generic FCS set to UTF-8 in FDMEE, why not convert our source file encoding to UTF-8? This is a very common approach when integrating data from heterogeneous systems. It's quite normal that they use different FCS, so conversion and standardization are commonly needed.
Is that possible? Is it easy?
Nothing is straightforward, but with a bit of creativity everything is possible.
We have already discussed how powerful Jython is for FDMEE scripting.
Using import scripts would be an option, but I don't like the idea of having one import script for each import field. The performance of applying multiple import scripts may not be good.
However, we know that we have an Event Script called BefImport that is executed just before our source file is imported. So what about using this event script to convert our source file encoding before it is imported?
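Just to illustrate the idea, here is a minimal sketch of such a BefImport event script (the file path is hard-coded for simplicity; in a real script you would derive it from the FDMEE context and probably restrict the conversion to specific locations):

import codecs, shutil

# Hypothetical path: in practice, build it from your inbox folder and file name
srcFile = r"C:\FDMEE\inbox\filename.csv"
tmpFile = srcFile + ".utf8"

# Read the source file decoding it from UTF-16LE...
fIn = codecs.open(srcFile, "r", "utf-16-le")
# ...and write it back encoded as UTF-8 (FDMEE's FCS in our scenario)
fOut = codecs.open(tmpFile, "w", "utf-8")
for line in fIn:
    fOut.write(line)
fIn.close()
fOut.close()

# Replace the original file so FDMEE imports the UTF-8 version
shutil.move(tmpFile, srcFile)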

My Source File
I will forget about GZIP compression for the moment and just focus on my source file:
If we have an editor like Notepad++ (my best friend), then we can easily see which encoding is used:
My file uses UCS-2 Little Endian, which is the older version of UTF16-LE (you can check it here)

My first import
Before building any solution, let's see what would happen with my import when the FCS option is set to UTF-8 and my source file uses a different one.
So after configuring our FDMEE artifacts (Location, Import Format, Data Load Rule, etc.), we import our source file and we get a grey fish:
The end user would go to the Process Details page and see something that he might not understand:
It says data was imported successfully and there is a warning for the mapping step.
So if data was imported, why don't I see it? It's time to look through the FDMEE process log:
When we see all the error messages above and the weird characters, we can suspect that it is probably related to file encoding. It seems that FDMEE was not able to read the file correctly.

A quick note on this: the Process Details page will not show that the mapping step failed, because there was no data imported and therefore no data to be mapped. With time you will learn how to interpret different scenarios like this. I know business users would appreciate more descriptive and intuitive messages in the Process Details page, without having to open the log. I'm not inventing this, they just told me :-)

Let me give you some advice here: don't try to build any solution as a first step if you have potential encoding issues. Just use Notepad++ to convert the file to UTF-8, or change the FCS option accordingly. Once you confirm data is imported, you can start thinking about the solution.
For example, if we import the same file after updating FCS option value to UTF16-LE:
We can see our data is successfully imported:

We now know our issue is related to charset encoding so let's think about the solution...

Friday, November 7, 2014

WoW you can finally set the Problem Type for FDMEE Service Requests in Oracle Support

Finally, it's here.

It's not a very interesting topic for the blog, but it deserves a mention.

So far we could only create SRs and assign the generic problem type "Technical Issue" to them.

I was creating an SR today for a customer and... oh my God!
Will this help to speed up problem resolution? We will see :-)

Friday, October 31, 2014

Cloud IDE for FDMEE Scripting. Make it your friend!

It has been a hard month...too busy with customers, surgery to pull out my wisdom teeth, recovering from an ultra trail marathon... at least we still enjoy summer time in Malaga!

Today I would like to introduce some tools that can be useful for you to learn the scripting languages of FDMEE:

  • Jython
  • SQL
We have all heard about Cloud this and Cloud that...but did you know there are IDEs (Integrated Development Environments) in the Cloud?
What is that? I will try to make it simple. It's a website where you can partially test your FDMEE scripts.
Why do I say partially? Because you shouldn't think you can just copy & paste your script, click Run, and voilà.

But what about situations where you need to build an import script to parse data lines to get the account, or convert strings to dates, or create a #SQL mapping which strips the source Entity? These tools can help you reduce implementation time. Sometimes you need to run the import step 10 times to get your import script working, and that drives me crazy.

I still have to say that I prefer Notepad++ to run my unit tests, but Cloud IDEs are also a good tool.

Today I will show you how you can use Python Fiddle to easily test string operations for your FDMEE scripts.

Navigate to http://pythonfiddle.com/
Navigate to the site and enjoy. See what it offers, like code snippets that may be useful for you, but what I would suggest is to initially think of this site as a learning place.
Create your login
You can use your Gmail account or social network accounts:
Your first example
So I need to parse the entity code from my source file and I'm not sure how to do it. You could start building your script, running the import, fixing the script, running the import, seeing results...
Let's use the Cloud IDE for the following case:

  • My source Entity looks like XXXX-YYY-PLANTN
  • I want to import into FDMEE the value XXXX_PLANTN

As you can see, I have divided the script into two parts (a sketch follows the list):
  • The function for my import script
    • Split the source field and get the first and third items (indexing starts at 0)
    • sField.split("-") will return a list of three elements ["XXXX","YYY","PLANTN"]
    • We use [n] to get the nth element
    • We use "+" operator to concatenate
  • Test code for sample source Entity field and source Record. I have used print to show the result in the bottom panel.
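For reference, a minimal sketch of what that script could look like (the function name is illustrative; FDMEE import script functions receive the current field and the whole record as parameters):

def parseEntity(strField, strRecord):
    # "XXXX-YYY-PLANTN".split("-") returns ["XXXX", "YYY", "PLANTN"]
    lstItems = strField.split("-")
    # concatenate the first and third items with "_"
    return lstItems[0] + "_" + lstItems[2]

# Test code: simulate the source Entity field and the source record
print parseEntity("0000-YYY-PLANT1", "0000-YYY-PLANT1;1000")   # prints 0000_PLANT1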

What is the objective?
  • Run the script
  • Fix syntax errors
  • If we have unexpected result, fix the script and try again
  • Everything works? Copy & Paste the function definition into your import script and run a final test with the entire source file.

In the example below we get a syntax error when we click Run: we forgot to close the string with "
Once I have fixed my script I get what I wanted: 0000_PLANT1
You can also save your script and share with other users if you wish:
This can be useful if you want to save all your code snippets in case you need to reuse them. Once a script is saved, you can always access your Dashboard and open it:

Some additional notes
As always, don't forget FDMEE uses Jython, so some functions or coding solutions available in Python may not be available in Jython. In theory you can use Python 2.5.1-based code in your Event and Custom Scripts. However, as explained in previous posts, Import Scripts use Jython 2.2.1.

Regarding SQL, let's say that you don't have access to any SQL tool and you cannot test the SQL query you are going to use, either in your BefImport script for the Open Interface Adapter or in a mapping script. There are also Cloud IDEs for SQL, like http://sqlfiddle.com. You can define your own database/tables online and import some sample data there.

Once you get into advanced mode you may prefer to use Eclipse or Notepad++, but at least you know there are other options to speed up your learning process and implementation time.

Other Cloud IDEs? Sure. It's now time for you to select your preferred one. This site has a good review of the different options.

Enjoy!

Monday, October 13, 2014

ODTUG Board Of Directors Election

Hi all,

2014 has been the year in which I finally jumped to the other side of the pond. There have been lots of people besides me making this possible :-) but I wanted to thank ODTUG for opening their arms and making me feel part of the family.
Now it's time to vote for the ODTUG board! 4 positions for 22 candidates. I'm sure all of them are fantastic and will put all their effort into making ODTUG even better for all of us. Cameron, Tim,... and Gary, who is a really strong candidate and has demonstrated his value to this group.
Vote here!


Wednesday, October 1, 2014

Tips & Tricks: Using filter on Locations when creating HSS Groups

Accidental discoveries. That's how I would define it.

Since old times we have had amazing accidental discoveries that have made our life easier & better: penicillin, microwave, velcro, teflon, vulcanized rubber, coca-cola, viagra...

My discovery is not as important as the ones I mentioned (I wish), but it can help you mitigate some of the drawbacks of the new Location Security functionality in FDMEE. Remember that this is just something that may or may not be helpful for you; the aim is just to share a funny finding.

Before starting, just to confirm that location security is optional. You will be asked to enable it when you navigate to Security Settings > Location Security tab:
A brief introduction to Location Security
I'm sure you were disappointed with the way security worked in FDM Classic: provisioning users in HSS (Hyperion Shared Services), then going to FDM User Security, adding the user, selecting which locations he had access to, bla bla...

In FDMEE, Oracle tried to get rid of this long process and re-designed location security so you just have to add users to groups in HSS, and forget about managing user security in FDMEE as well.
In a nutshell, you define and trigger the creation of HSS groups from FDMEE Security Settings, then provision users to these groups, and that's all. FDMEE will do the rest.

But these groups are particular...why? They have a specific naming convention defined by FDMEE:
PREFIX_LOCATIONNAME_SUFFIX
At run-time, FDMEE will check the groups the accessing user belongs to in order to determine which locations he has access to. This is how the new location security works.

I have to say that I like the way the new Location Security is managed, but I don't like how the security groups are created in HSS. This functionality is not as flexible as expected: the number of groups created can explode in Shared Services, as FDMEE creates groups for all locations.

How does it work?
Did you get it? Let me provide some notes about the process.

1. Navigate to Setup Tab > Security Settings > Location Security Tab (only admins or roles having access to Security Settings). Then set up the templates for the HSS groups based on your requirements:
A template has:
- Prefix: any but don't use location names as prefix
- Suffix: any
- Roles assigned: check the roles you want to assign to the HSS group. You may need to configure Role Security as well if required.

2. When we click Maintain User Group, FDMEE will create one group in HSS for each template|location combination. In other words, let's say that we have 3 templates and 200 locations: FDMEE will create 3*200 = 600 groups in HSS...
The information message shown to the user is clear enough:
In the example above we have 3 templates for HSS groups and many locations. What is the result? Something as follows:
As you can see, you may end up with many groups that you don't need. Some of the names may sound senseless due to your prefix and suffix definitions, so you can just leave them or remove them. If you remove them, they will be re-created from FDMEE when you click again...
BTW, did I say you need to be an HSS admin to create groups from FDMEE?
You have to be.

3. We have a user named TyrionLannister who only needs access to location SAPECC_NEWGL_HFM with the Data Integration role.
I will add Tyrion to the HSS group FDMEE_SAPECC_NEWGL_HFM_Data Integration:
4. Tyrion accesses FDMEE and tries to select locations from the POV:
As expected, he only has access to location SAPECC_NEWGL_HFM.

What does FDMEE do?
If you have a look at the ERPIntegrator log, you will see what FDMEE does to manage location security:
What about having a user assigned to more than one group?
If a user belongs to HSS groups MyPrefix1_LOCATIONA_MySuffix1 and MyPrefix2_LOCATIONB_MySuffix2, he will have access to LocationA with the roles assigned to template MyPrefix1_<LOCATION>_MySuffix1, and to LocationB with the roles assigned to template MyPrefix2_<LOCATION>_MySuffix2.

Nice to have
In my opinion this approach is much better than FDM's but, as I already said, the way groups are managed/created could be enhanced. I know Oracle is working on that, but here are my recommendations.

  • When the administrator clicks Maintain User Group, he should be able to select which locations he wants to create the groups for:
    • All Locations
    • Filter locations, like "Locations starting with", "Locations containing"...
    • Select locations one by one
  • These options should be available for each group template, or enable the administrator to maintain user groups only for the templates selected in the grid.

What do you think?

Accidental discovery
And now let's get to what I wanted to tell you. One day I was playing with the Quick Search filters on the Locations page and I navigated to the Security Settings page without removing the filters.
What happened when I created my group templates and clicked Maintain User Groups? It's better if you see it. If we go to HSS and search for the groups having prefix SAPBW and suffix FILTERED:
The group template was only applied to the locations filtered in the Locations page!
Unfortunately, there is still one issue:
As you can see, all groups were created even for the filtered locations. Maybe if Oracle adds a filter to the Security Settings page, we are done :-)

BTW, there is a technical reason for this happening: the process that creates the user groups uses an internal list of locations, and that list is the list of locations shown in the Locations page.

That's all, folks! Another day we will discuss role overlapping between users and groups, and more details about location security.

Enjoy!

Friday, September 26, 2014

I'm new to FDMEE - Part 1. How do I start? What should I know?

Hi all,

I know I should have posted this entry a long time ago, but it's never too late if it's good. And I promised my colleague Cameron.

So, are you new to FDMEE but have experience with FDM Classic? Totally new to FDMEE? End user? Developer? Administrator?... Many questions, right? To be honest, I don't like making any distinction between users, although I understand they often want to know different functionality of the product. Basically, they want to know only what they need.

When I started with FDM Classic/Upstream some time ago, I didn't have anyone sitting with me showing me the product, its functionality, examples, etc. How did I learn? Patience, time, errors, troubleshooting, getting some knowledge from gurus...and projects! I liked the product, but honestly what I liked more was being a solution architect for integration. And I say that because I would recommend you not to lose sight of this stream. You can learn how to create an import format, mappings, logic accounts, etc., but the essence is to design the best solution for your requirements, and then build it. In other words, don't ask yourselves only the what and the how but also the why.

I have always tried to let customers know that data integration is as important as their functional applications like HFM or Planning. In fact, data integration is essential. You can have the best Planning application, with the best business rules, and the best forecast process in the world, but if you don't have the data you need for that model to work, you have nothing. Why would you want a Ferrari if you don't use the best fuel? It's just my humble opinion.

FDMEE offers a lot of functionality. When I was a student, my mum always suggested starting with the easier questions of the exam. She said this was a way of gaining confidence. I will make the same recommendation to you. For example, don't try to learn how to create complex mapping logic in FDMEE if you don't know how mappings work. With the basics you will build the complexity you want.

Resources
Is the admin guide enough to learn? No, absolutely not. You can use the admin guide as support to understand the functionality, but the guide is not going to show you how to design a solution. It does not tell you how to solve issues either :-(

In this first post I will encourage you to discover what FDMEE does:

  • Read the first chapters of the admin guide without focusing on the technical details like database or folder architecture. You will have time to learn these topics.
  • Go to Oracle site and see the product data sheet
  • Go to any partner's site working with FDM and see their product introduction
  • Visit OUG groups and subscribe in order to get access to presentations (ODTUG, UKOUG, etc.)
  • Visit blogs (fdmguru, thinkfdm, mine...)
At the end of the day you should have gathered enough information to have a first understanding of the product. Let me add some points to your findings.

Some Introduction...
If you look at any sales presentation or demo, you will learn about the value FDMEE can add to your business.
I will wrap it up in five points:

  • FDMEE is more than an ETL tool
  • It provides audit trail functionality for source financial data
  • It helps ensure data integrity and mapping consistency, allowing easy reconciliation of financial data
  • It helps users with data error investigation, identification, and correction
  • It provides flexibility to meet all levels of complexity in your integration
Only financial data? Not really. You can also process non-financial data. It all depends on the data you need for your target model.

So technically, what does FDMEE do?

So you now know what the product provides to your business :-) It's time to move on to how we work with the tool.
All this being said, my first recommendation would be to start asking yourself: what does FDMEE do?
And here is where my favorite approach comes in... High Level > Detailed Level.

As I said, FDMEE can do lots of things, but what is its main purpose?
Level 1: Get data from a source and load it into a target application
Level 2: Get data from a source, map to my target dimensional model, validate, and load into a target application
Level 3: Get data from different sources in different formats...

This is the way I work and the way I like it (not necessarily the way you do). High level explanations reach all stakeholders, regardless of whether they are business users, techies, developers, etc. Detailed explanations may vary based on your audience.

In the following picture you will see the normal data load process for FDMEE users:

The objective? Users have to move the fish "up stream" in the header of the FDMEE web interface in order to get valid data into the application(s). The process consists of 4 steps:
  1. Import data from your source into FDMEE. During this step mappings are executed so your source data is converted into your target model (dimension mapping). For example, your Cash account is 1010 in your source DWH data and Cash in your HFM application. You then create a mapping rule to convert 1010 into the Cash member.
  2. Validate that all your source data has been successfully converted. If you have data for source account 1010 but you did not create a mapping rule for it, you will not be able to progress.
  3. Export your converted data to your target application.
  4. Check (optional) will apply a set of rules to enforce data integrity. For example, you may want to ensure that the data you loaded to HFM is balanced or that specific balances are signed as expected.
A must-know :-)
A bit of history


Why fish? Fish are not new in FDMEE. They have been there since the old times of UpStream. In 2006 Hyperion acquired UpStream and re-branded it as Financial Data Quality Management - FDQM (FDM for friends).
In 2007, Oracle acquired Hyperion and re-branded it as Oracle|Hyperion FDQM. And finally, in 2013, we had the first release of FDM Enterprise Edition, our lovely FDMEE.
Folks, I think that's enough for today. Next time, we will start looking at how FDMEE processes our reference source system...flat files! FDMEE extracts data from different source systems in different ways, but the 4 steps we mentioned are common to all of them. So, as I said before, let's keep things simple, and files are simple, aren't they?

Enjoy!



Friday, September 19, 2014

ADF Patch 19586149 for issue "Excel exported from Data Grid has only headers"

ADF...do you get any error message where this word appears?

I had several customers facing the following issue: when they tried to export data to Excel from the Data Load Workbench, the output file was empty, just headers.

The problem was that this issue was happening randomly. For small sets of data it was working; for large ones it was not, unless we applied a filter in the quick search boxes. Then suddenly it was working for large sets...

The bug was not coming from FDMEE but from ADF, and luckily we now have a new patch to solve that issue and a few more:


We have already applied the patch for most of them, and they confirmed data can now be exported to Excel with no issues :-)

Tuesday, September 9, 2014

FDMEE PSU520 released (11.1.2.3.520)

From the creators of FDMEE PSU100, PSU200, PSU500, and PSU510, we now present... PSU520!!!
I know lots of people have been waiting for this the whole summer. Honestly, I haven't. I was just thinking of enjoying summer time :-) But we finally have it here, so it's time to see what is going on.

Before installing the patch and discussing the new features, let's see the defects fixed in this PSU (40+):
I have faced some of these issues, but I would like to highlight one known issue related to an ADF bug. I had several customers getting an empty Excel file when exporting data from the Data Load Workbench. We finally have a workaround:


18665000 – Export to Excel works for small data sets, but not for a large data set.
Workaround: First, in the Data Load Workbench, display the data to export to Excel. Then, click on the last row of data. (In some cases you will have to scroll down through many pages to get to the last row.) Lastly, click the Export to Excel icon.
Installation
Applying this patch is similar to PSU510. The tasks to be performed are well described, so I didn't expect any problems installing it. If you are coming from previous PSUs you can skip most of the steps:

1. Apply ODI Pre-Requisite Patch 17178347 (SKIPPED as it was already done in PSU510)

2. Stop the FDMEE service

3. Apply patch 18202833 using opatch: 

Unzip the patch in folder %EPM_ORACLE_HOME%\OPatch
Then apply the patch as follows:

opatch apply 18202833 -oh %EPM_ORACLE_HOME% -jre %MW_HOME%\jdk160_35

Example for Windows:
opatch.bat apply E:\Oracle\Middleware\EPMSystem11R1\OPatch\18202833 -oh E:\Oracle\Middleware\EPMSystem11R1 -jre E:\Oracle\Middleware\jdk160_35
Example for Linux:
opatch apply /u01/Oracle/Middleware/EPMSystem11R1/OPatch/18202833 -oh /u01/Oracle/Middleware/EPMSystem11R1 -jre /u01/Oracle/Middleware/jdk160_35

In my case:
Note: as all database changes and ODI scenarios are updated during the FDMEE Server startup after this patch is applied, ignore errors in the FDMEE Server log due to existing objects in the FDMEE database.

4. Add rewriting rule to the OHS configuration file (SKIPPED as it was already done in PSU510)
If you are not using OHS, have a look at this Support Document.

5. Register the HFM Adapter
As we have already applied 11.1.2.3.XXX, we first need to unregister the HFM adapter:
We can then register the adapter again, either using the EPM System Configuration tool or the command RegisterHFM.vbs. I prefer the second option as I can see all the detailed messages:
6. If you are running the JDE or SAP BW adapters, import the adapter projects for them.
Important Note: this step shouldn't be performed if you already did it either in PSU200, PSU500, or PSU510. The two ODI adapter projects have not changed since PSU200. The patch readme can be very confusing and take you to undesired scenarios:
Step 6 is missing the following statement that the other steps do have:
"This step is not required if you have already performed it as part of the 11.1.2.3.XXX patchset update application"
What can happen if you perform this step and you have already configured/customized/used the adapters?
I will show you a situation you would like to avoid.

In my case, I had already generated the ODI scenario from my JDE import format, and configured the SAP BW adapter from FDMEE (so the ODI package template, interface, and scenario were generated).

a.    Log in to the Work Repository FDMEE.
b.    Go to Designer->Project.
c.    Select Import Project, then IMPORT_UPDATE for mode (it is actually INSERT_UPDATE), and then the Project File from the directory: C:\Oracle\Middleware\EPMSystem11R1\products\FinancialDataQuality\odi\11.1.2.3.00\workrep
When we use INSERT_UPDATE mode, we are warned about some objects being deleted/replaced:
Note: when warnings are prompted for object replacement, be aware that ODI objects created either from ODI or FDMEE (e.g. the Package, Interface, and Scenario for SAP BW) will be deleted

My previous work has been deleted :-(
What happened above was due to importing the two ODI projects. This is not supposed to happen, as you don't have to import the ODI projects if you already did it in previous PSUs (at least in PSU520).

But in the future it may happen that Oracle updates the ODI projects, so you would have to import them in order to get the new versions.

In these situations, I think the documentation should state that, before importing new ODI projects, you should export the Packages, Scenarios, Interfaces, Variables, or any other ODI objects which were created during your FDMEE configuration or customization of these 2 adapters (you can use Smart Export from ODI). Then, after importing the new adapter projects (if needed), you can import your objects again.
In case new versions imply that you need to regenerate ODI objects from FDMEE, the documentation should inform you accordingly.

If you don't back up the ODI objects that FDMEE generated or that you customized:
  • If you have ODI customizations (I didn't have any in this environment) and you don't back them up, then you will have to recreate them from FDMEE after the adapters are imported
  • If you don't have ODI customizations and you don't back up your ODI objects:
    • For SAP BW: generate the Package Template + regenerate the ODI scenarios from your import formats. You don't need to import info cube definitions from FDMEE again, as ODI models are not deleted (models are not present in the SAP BW adapter because they are created as needed when info cubes are imported from the FDMEE Source Adapter page):
    • For JDE: regenerate the ODI scenarios from your import formats. The models used by the adapter are already delivered by Oracle in the JDE adapter, so you don't need to worry unless there are additional models for tables you reversed from ODI.

Note: if you have customized any of the two adapters following the instructions in Support Doc 1920095.1, then you would have to perform the backup + import of the new adapters in the Development Work Repository you created for customization.

7. Update the EPM Instance with the new report batch scripts (SKIPPED as it was already done in PSU510)

Windows. Copy the following file:
%EPM_ORACLE_HOME%/products/FinancialDataQuality/bin/runreport.bat.template
to: %EPM_ORACLE_INSTANCE%/FinancialDataQuality/runreport.bat

Linux. Copy the following files:
- runreport.sh.template
- runbatch.sh.template
- encryptpassword.sh.template

8. Start FDMEE Service
DB changes are applied and ODI scenarios are imported. As described above, you don't need to worry about errors raised when creating objects that already exist in the DB:
9. Restart the HTTP Server
Once we finish, we can check the FDMEE current version from Workspace:
New Features
Although we know that most of the features we are waiting for are coming in 11.1.2.4, this PSU brings some of the missing ones:

Point of View Lock for All Locations
This feature was on the must-have list. All FDM Classic users were expecting it. With this new functionality, FDMEE administrators will be able to prevent other users from loading data to locked POVs during business processes like month-end closing.

What can we lock? All locations related to a target application, for a given Period and Category.

How do we lock? There are two new buttons, "Lock POV" and "Unlock POV", in Application Settings.
When we click the Lock POV button, a window is prompted so we can select the Period and Category to be locked for all locations.
When we click OK, we get an information message:
How does Lock POV work? When running a Data Load Rule from the Data Load Workbench, the Data Load Rule page, or Batches, FDMEE will check whether the period and category being processed are locked.

Loading data for a locked POV
When we run the DLR we get an error message (even if we are administrators):
The example above was for a Location for which we had not processed data for January 2006 - Actual.
In the same way, if data has already been processed and the POV is locked, we will get the same message:
I really miss an icon in the POV bar showing that the POV is locked :-(
We had this functionality in FDM Classic:
What happens internally in the DB?
Once we click OK after selecting the Period and Category, table TLOGPROCESS is updated so that column INTLOCKSTATE is set to 60 for all entries where PERIOD and CATEGORY match the ones we selected for our target application.
INTLOCKSTATE = 60 > Locked
INTLOCKSTATE = 50 > Unlocked
But what happens with new DLRs which have not been executed yet?
This may not be the correct question. I would rather ask: what happens when we lock a PERIOD-CATEGORY combination we never processed for our target application?
For example, I will lock JANUARY 2007-ACTUAL. This is a POV I never processed:
When we click OK: 
We get the message:
POV cannot be locked because data was never loaded from any Location to this POV
So basically we can only lock PERIOD-CATEGORY combinations that have already been processed for some Location loading to our target application. In other words, a PERIOD-CATEGORY combination can only be locked if there is a record for that combination plus target application in table TLOGPROCESS. Therefore I cannot prevent users from loading data for a new PERIOD-CATEGORY combination using this new functionality. I will have to wait until they load data for the first time and then lock that POV.

How do we unlock a POV for all Locations? 
Before unlocking, I'm going to lock the following PERIOD-CATEGORY combinations:
* January 2006 - Budget
* February 2006 - Budget
So I have already locked the two combinations above plus January 2006-Actual. The following SQL query shows my locked POVs:
When we try to unlock, we don't see the locked combinations, as we select PERIOD and CATEGORY separately:
So how do we know which PERIOD-CATEGORY combinations are locked?
In fact, I will try to unlock February 2006-Actual, which is actually not locked:
Surprise, surprise... POV has been unlocked successfully!  
But February 2006-Actual combination was not locked :-)
Let's unlock February 2006-Budget:
After seeing the same information message as above (so period unlocked), I run the same SQL query:
As expected, INTLOCKSTATE has been updated to 50 for all records in TLOGPROCESS matching our target application and the PERIOD-CATEGORY combination we unlocked.

Enhancing Unlock POV
To be honest, I was expecting a table like TPOVLOCK where we would have the PERIOD-CATEGORY combinations locked by target application. I'm not going to question why Oracle designed this functionality that way, but I know administrators will be asking... "How do I know which POV Period-Category combinations are locked?"
I see Custom Scripts as a good approach. You could even build your own Custom Script to unlock POV PERIOD-CATEGORY combinations.
Once I build my Custom Script, I try to execute it...

Hey, hey, where is the Script Execution link? I'm an administrator so I should see it, but I don't :-)
So we are probably facing an important bug. As a workaround, I have just set up an Intermediate-2 user and enabled Script Execution for the Intermediate-2 role...but I don't see the menu entry either:
If you never saw this option, here you have it from 510:
NOTE: Once this is fixed, I will update this part.
At the moment we have already got confirmation from Oracle about this bug. They will be releasing a patch ASAP (1-2 days).
Anyway, bear in mind that if you know how locked POVs are stored, you can use custom scripts for any related administration task. For example, the following custom script uses a parameter to show locked POV PERIOD-CATEGORY combinations:
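As a reference, here is a minimal sketch of that kind of script. It assumes fdmAPI.getConnection() returns a JDBC connection to the FDMEE database and that TLOGPROCESS exposes the POV through columns like PARTITIONKEY, CATKEY, and PERIODKEY (check your schema before using it):

# Sketch: log all locked POVs (INTLOCKSTATE = 60) from TLOGPROCESS
query = "SELECT PARTITIONKEY, CATKEY, PERIODKEY FROM TLOGPROCESS WHERE INTLOCKSTATE = 60"
conn = fdmAPI.getConnection()
stmt = conn.createStatement()
rs = stmt.executeQuery(query)
while rs.next():
    fdmAPI.logInfo("Locked POV > Location key: %s, Category key: %s, Period key: %s" %
                   (rs.getString("PARTITIONKEY"), rs.getString("CATKEY"), rs.getString("PERIODKEY")))
rs.close()
stmt.close()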
NOTE (10/SEP/2014): Oracle just confirmed the patch for this bug (19579637) 
Lock POV for Multi-Period
I was wondering what would happen with Multi-Period files. In the following example I have a DLR for a Multi-Period file having 3 periods:
* January-2006
* February-2006
* March-2006
When we execute the DLR from the DLR page:
Is this message because January 2006-Actual is in my current POV bar? Or because January 2006-Actual is a column in the Multi-Period file?
As always, any answer needs a test, so I change the current POV Period to July 2006:
Whoops, we don't get any error prompted, but we can see the status is FAILED.
If we navigate to the Process Details page, we can see how it failed, but with no error messages/information (bug?):
Let's have a look at our friend the ODI Operator:
The DLR was executed but it failed because January 2006-Actual was locked.
And finally, what does our friendly FDMEE process log have to say?
As we expected, the DLR failed when getting the POV list. Basically, it gets the list of POVs to be processed and checks whether the PERIOD-CATEGORY combinations are locked.

I like stress testing, so what if we lock February 2006-Actual and unlock January 2006-Actual? Would it process January 2006 or stop before processing any periods?
My locked POV will now look like:
If we execute the Multi-Period DLR:
Those who predicted that data was not going to be processed for any period were right. In fact, we saw before how the error is raised at the Pre-Import step, meaning that this check for locked POVs is performed before any data is processed.

POV Lock in FDM Classic
I wouldn't like to leave this new functionality without comparing it to what we had in FDM Classic.
If we have a look at the old FDM admin guide, we can see details about the POV Lock settings:
If you look at the listed options, you will notice that we could lock the current POV, the POV being the combination of Location-Period-Category and not only Period-Category for all locations. In other words, we could lock the POV for specific locations. However, in FDMEE we can only lock the POV (Period-Category) for all locations and not for individual ones.

What was I expecting here? Functionality to lock POVs so we can lock a specific Location-Period-Category-Data Load Rule. Maybe in 11.1.2.4? I would really like to see a "POV Management" menu entry. And I know FDMEE administrators would love it as well.

All this being said, I think that's enough discussion about Lock/Unlock POV...

Global User for Application Access
This is a big step forward, something that has been demanded for years. Are you familiar with this issue?
It was inherited from FDM Classic. 

Basically, only a user with Administrator access can export data to Planning and Essbase.
With this new application option, we can specify a global user that will be used to access the Planning/Essbase application instead of the FDMEE sign-on user. This global user should have administration access to the applications so he can lock/unlock Essbase objects like rules files.

We can specify the global user from Register > Target Application > Application Options:

This new feature will be totally transparent for end users, as they won't know which user is actually loading data to Planning/Essbase.

PS: what do you think about security when using this approach?

JavaScript supported Drill-Through
Before this new patch, FDMEE only supported HTTP-based drill-through.

Let me describe an example: we all know we can drill through to the SAP Web Client, but what about drilling through to the SAP GUI (desktop client)? It was not possible until now. We just have to write the appropriate JavaScript and specify it in our source system.
In the same way we do with HTTP drill URLs, we can pass contextual parameters to the JavaScript.

If we navigate to the Script Editor page, we can see the new script type "Drill-Through" and technology "JavaScript":
Here is a dummy example to show the Description-1 value when I try to drill through using JavaScript:

1. Ensure that the Application Root Folder is correctly configured at system level (System Settings). The new JavaScript drill-through scripts are defined for source systems, so they are saved at system level rather than application level.
The JavaScript will be saved in Application Root Folder (System Settings)\data\custom\yourscript.js
2. Navigate to Script Editor > Drill-Through > New. Enter the name of the script to be created (with no extension).
3. Write or paste your code. The script function name and the filename should be the same. In my example, I'm just showing a message with the Description.
FYI I have written my JS file in Notepad++:
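My script looks something like this (a minimal sketch; per step 3 above, the function must be named like the file, testDrillThruJS in my case):

function testDrillThruJS(desc) {
    // show the Description-1 value passed from FDMEE
    alert("Description-1: " + desc);
}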
And then pasted it into the script I created from the Script Editor:
As described above, the JS script is stored at system level:
That means you will see the script in the Script Editor under Drill-Through scripts regardless of your target application. But it will be the same script.

NOTE: there is a known bug that will be fixed along with the "Script Execution" one: we need to copy the JavaScript file to scripts\custom at application level as well. This step is currently needed to make the JavaScript work:

4. Update the Drill URL in source system page to be #javascript
5. In the Source Adapter or in the Import Format for files, update the Drill URL with the script call:
testDrillThruJS('$DESC1$')
Single quotes are only required if you want to pass the parameter as a String.

6. Testing the new JavaScript drill-through
Import data and drill through from any source amount:
And...
I have been able to show a message with column Description-1 (DESC1 in TDATASEG).
In the same way we show messages, we can open local programs like SAP GUI :-)

NOTE: I just noticed that the description is shown with "+" instead of spaces. I will investigate, but in the meantime we can force a replace in our JavaScript:
So we get what we expected:
And lastly, just a quick look at the ERPIntegrator0.log to see how this new feature works:

EBS Data Loads supported in Open Batches
We can now use Open Batch for EBS and PSFT data sources.

Support for JDE Tools 9.1.4
This version of JDE Tools is now supported. You can find more details about configuring the JDE integration in the admin guide.

Other changes in new version
These are not new features but technical changes in FDMEE that I considered interesting.

HFM Adapter Action Scripts
There are some changes in HFM_EXPORT.vbs to fix some issues with Line Item Details and to add a column order when exporting files to HFM:
Column order is now set as:
 "!COLUMN_ORDER=SCENARIO,YEAR,PERIOD,VIEW,ENTITY,VALUE,ACCOUNT,ICP,"&strColumnOrder
where strColumnOrder is replaced by all active custom dimensions, ordered by column name ascending.
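For example, for a classic HFM application with four custom dimensions active, the resulting header line would presumably look like:

 "!COLUMN_ORDER=SCENARIO,YEAR,PERIOD,VIEW,ENTITY,VALUE,ACCOUNT,ICP,CUSTOM1,CUSTOM2,CUSTOM3,CUSTOM4"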

API
I don't want to anticipate anything, but we may have some new functions, like being able to insert custom information into the Process Details table. We will see once the admin guide is published.

ODI Scenarios
It seems that a new scenario has been added to refresh source values in FDMEE. This was probably done through the API before:
Report Templates
I don't see any new template.

As you can see, it took some time to fully analyze the new patch and its new features. Now it's time for you to install it and stress test it as much as you can!

If you have any findings or any concerns about 520, feel free to share them with us :-)