Monday, December 5, 2016

Code Snippet: looping a ResultSet returned by API function

FDMEE provides a set of API functions which return a ResultSet object (getLocationDetails, getBatchDetails, getCategoryList, etc.). These functions execute a SQL query in the FDMEE database behind the scenes.
Besides, the FDMEE API function executeQuery can be used to execute our own SQL queries in the FDMEE database.

The following snippet shows how to loop a ResultSet object, get the column values returned, and log details into the FDMEE process log. In my example, I've used a simple query to get the internal ids of the Point-of-View components (Location, Category, and Period) for the current process id (load id)

How does it work?

Let's have a look!

Looping a ResultSet object returned by FDMEE API Function

 Snippet:       Loop records in ResultSet object (executeQuery)
 Author:        Francisco Amores
 Date:          12/06/2016
 Notes:         This snippet can be pasted in any event/custom script.
                Function executeQuery executes SQL queries in the
                FDMEE database
 Instructions:  Set log level (global or application settings) to 5 
 Hints:         Use this snippet to execute a SQL query with API
                function executeQuery and loop all records in the 
                ResultSet object returned by API.
                This snippet can be also used with other API functions
                returning ResultSet objects
 Example:       Get distinct keys for POV components
 FDMEE Version: and later

# Import section
import java.sql.SQLException as SQLException
import os # Optional

# SQL Query (Example -> get distinct partitionkey, catkey, and periodkey for specific load id)
sqlQuery = """SELECT DISTINCT PARTITIONKEY, CATKEY, PERIODKEY
            FROM TDATASEG
            WHERE LOADID = ?"""

# List of parameters
# Empty list ([]) if no parameters (?) are required -> params = []
loadId = fdmContext["LOADID"]            
params = [loadId]             

# Debug query
fdmAPI.logDebug("SQL Query (params):%s %s (%s)" % (os.linesep, sqlQuery, params))

try:
    # Execute query (returns ResultSet)
    # You can also use any API function returning a ResultSet
    rs = fdmAPI.executeQuery(sqlQuery, params)
    # Loop records if ResultSet has data
    if rs.isBeforeFirst():
        while rs.next():
            # Get column values
            partKey = rs.getString("PARTITIONKEY")
            catKey = rs.getString("CATKEY")
            periodKey = rs.getString("PERIODKEY")
            # Write to log (Optional)
            fdmAPI.logDebug("POV Keys: %s, %s, %s" % (partKey, catKey, periodKey))
            # Code executed for each record
            # ...
    else:
        # No records
        fdmAPI.logDebug("No records returned!")
    # Close ResultSet
    rs.close()
except (SQLException, Exception), ex:
    # Error message
    errMsg = "Exception: %s" % ex
    fdmAPI.logDebug(errMsg)
    # Optionally raise RuntimeError to stop the process
    # raise RuntimeError(errMsg)

Code snippets for FDMEE can be downloaded from GitHub.

Monday, November 28, 2016

Open Batch, putting some light on it!

After some discussions with different customers, I got feedback about the Open Batch documentation. It's true that it is not clear and it omits some details. However, I also got feedback from Oracle that it will be updated soon.

In the meantime, I think it is a good idea to shed some light on it, especially on the Import and Export modes available for batch files. I will try to keep things simple and will focus on the common scenarios.

Please bear in mind that this post assumes you are familiar with the Open Batch concept. In case you aren't, you can visit Oracle's documentation in this link.

Basically, an open batch is a type of batch definition which lets you automate/schedule workflow processing (Import, Validate, Export, Load, and Check). This functionality is typically used when source files are generated from external systems like SAP. If you come from the FDM Classic world, this is the replacement for Batch Loader.

Open Batch file names
Open batch uses files which have the following information in their names:
  • POV: Point-of-View where data is processed (Location, Period, etc.)
  • Import Mode: how data is imported into FDMEE's POV
  • Export Mode: how data is exported into the target system (HFM, Planning, etc.)
They need to be located below openbatch folder (sub-folders can be also used):
That being said, let's see how it all fits in the file name. There are two naming conventions for open batch files:
  • Data Load Rule auto-creation: used when the DLR does not exist yet
  • Existing Data Load Rule: used when the DLR already exists in FDMEE
Data Load Rule auto-creation
This naming convention is used when we want to create the Data Load Rule on the fly.
Assuming the delimiter is set to @ (valid delimiters are "@", "_", "~" and ";"), the file name follows the pattern FreeText@Location@Category@Period@IE.ext:
  • FreeText is a free text which is also used to define the order of execution for multiple batch files.
  • Location is the POV Location
  • Category is the POV Category
  • Period is the POV Period
  • I is the Import Mode into FDMEE
  • E is the Export Mode into Target
  • ext is the file extension (typically txt or csv)
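To make the convention concrete, here is a small sketch that splits a batch file name into its components. The helper function and the example file name are illustrative only (they are not part of FDMEE), assuming "@" as the delimiter and the auto-create naming convention described above:

```python
# Illustrative helper (not an FDMEE API): split an open batch file name
# into its components, assuming "@" as delimiter and the convention
# FreeText@Location@Category@Period@IE.ext
def parse_open_batch_name(fileName):
    name, ext = fileName.rsplit(".", 1)
    freeText, location, category, period, modes = name.split("@")
    return {
        "FreeText": freeText,
        "Location": location,
        "Category": category,
        "Period": period,
        "ImportMode": modes[0],
        "ExportMode": modes[1],
        "Extension": ext
    }

# Hypothetical batch file name
pov = parse_open_batch_name("10@Spain@Actual@Jan-2016@RR.txt")
```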
To use this approach you need to enable Auto Create Data Rule option in your batch definition:
In this case, we are using a specific sub-folder within openbatch folder: BATCH_FOLDER (this is very useful when you have multiple source systems and you want to have individual batches for them)
Let's say we have the following open batch file:
As a result of running the open batch, a new DLR has been created (LocName_Category):
Finally, you can check the FDMEE process log for details:
  1. What happens if we run the same batch file again? Nothing special, FDMEE just reuses the existing DLR.
  2. What happens if we disable Auto Create DLR because the rule is already created? Same as above.

Existing Data Load Rule
This naming convention is the most common one. Open batch assumes the DLR already exists in FDMEE so there is no need to create it.
Assuming the delimiter is set to @, the file name follows the pattern FreeText@DLR@Period@IE.ext:
  • FreeText is a free text which is also used to define the order of execution for multiple batch files
  • DLR is the POV Data Load Rule
  • Period is the POV Period
  • I is the Import Mode into FDMEE
  • E is the Export Mode into Target
  • ext is the file extension (typically txt or csv)
Why don't we need Location and Category? They are implicitly defined in the DLR.

Import and Export Modes
I tried to summarize the most common source/target import and export modes in the following tables:
When you run an open batch, you can see import and export modes in the process log:
Process Level
We can configure the workflow steps we want the batch to execute:
As you can see, there is no Import/Validate split any more. The reason is that, as you may know already, if we import data, then it is automatically validated.

Open Batch for Multi-Period
We can also use open batches when working with multi-period files (Excel or Text)
The naming convention for batch files will be:
FreeText@DLR@Start Period@End Period@IE.ext

I will cover multi-period batches in future posts.

Free Text, be creative!
Free text field is commonly used to define sequence when batch files are processed sequentially. However, it is also a good place where to have some useful information for our batch custom actions.

Let's say that we have batch files generated from external system like SAP. Our batch process has been customized to send email notifications with results after each batch file is processed.
To whom? To the SAP users who triggered the file generation.
How do I know their email addresses? We can have the user name in the free text field. Then, our custom process can easily connect to the LDAP server in order to get the email address based on the user name.
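As a sketch of this idea (the file name layout and the "-" separator inside the free text are assumptions for illustration), extracting the user name could look like this:

```python
# Hypothetical batch file name where the free text carries a sequence
# number and the SAP user name, separated by "-"
fileName = "001-jsmith@DLR_ACTUAL@Jan-2016@RR.txt"

# The free text is the first component of the "@"-delimited name
freeText = fileName.split("@")[0]
# Split it into the sequence and the user name
sequence, userName = freeText.split("-")
# userName ("jsmith") would then be used to query the LDAP server
# for the email address
```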

For example, after processing file:
FDMEE will send an email notification to the user's email address, which is stored in the Active Directory. Nice, isn't it?

Open Batch files for ERP or Source Adapters
Open batch files are commonly used for File to EPM interfaces. However, they can also be used with other sources (E-Biz, PSFT, JDE, SAP, etc.)
The only difference is that the batch file is empty, as it is actually used only as a "trigger".
For example, for Universal Data Adapter (UDA):
And the DLR is executed in the same way as having File as source:
The FDMEE built-in scheduler has some limitations:
  • It can't be easily scheduled at high frequency (e.g. every 5 minutes)
  • When executed, it creates a new process id (and therefore DB information, batch sub-folder, etc.) even if there are no batch files
  • You can't easily delete a specific schedule if you have several
  • You can't easily see which batches have been scheduled and when
Regarding the 2nd bullet point: in the current version, nothing prevents you from executing an open batch when there are no batch files. As a result, you end up with several batch entries in the process details page:

Luckily, there are different workarounds for limitations above. I will cover some of them in future posts.

Some Tips & Tricks
Based on experience, I think this information can be useful:
  • You can group multiple open batches in a master batch (type Batch) although all open batches must be configured for the same target application
  • I would not use parallel open batches in the current version; there are some known issues
  • When customizing your batch process, bear in mind that batch files for the same DLR will overwrite data and some audit information in the FDMEE database. For example, if you have one DLR where multiple files for different entities are processed
  • Avoid using "_" as the file delimiter. The reason is that underscore is commonly used in DLR names
  • Try to force source systems to generate batch files following the naming convention. Otherwise, you can execute a custom script to rename the files before the batch is processed
  • Add an email notification system to your batch process. This is not out-of-the-box functionality in FDMEE, but Admins and end-users will appreciate it. Who doesn't want something like this?

That's all folks.

I hope you enjoyed reading this post and have a clearer idea about open batches. And remember, the best way of learning is to use them :-)

Friday, November 11, 2016

Code Snippet: script code based on EPM environment

We may have some scripts whose code differs from one environment to another. Maintaining different code for the same script is a risky task.
What about having one single script which you can migrate to the other environments with no risk?
How can I make my code dynamic based on the EPM environment I'm working with?
Let's have a look!

Getting EPM environment based on FDMEE Server hostname

 Snippet:       Get environment running the script based on the FDMEE Server
 Author:        Francisco Amores
 Date:          11/11/2016
 Notes:         This snippet can be pasted in any event/custom/import script.
                Output will be logged in the FDMEE process log 
                This script uses socket python module
 Instructions:  Set log level (global or application settings) to 5 
                To add new servers, include an entry in the dictionary.
                You can get hostnames by executing command "hostname"
                from the command line in the FDMEE server(s).
 Hints:         Use this snippet to avoid having different script codes
                across environments. The right code will be executed
                at runtime based on the FDMEE server running the process
 FDMEE Version: and later

# import socket module
import socket

# Dictionary of FDMEE Servers
# These are sample servers and should be replaced by existing ones
dictFDMEEServers = { 
                     "FDMEE_SERVER_DEV" : "DEV",
                     "FDMEE_SERVER_PROD1": "PROD",
                     "FDMEE_SERVER_PROD2": "PROD"
                   }

# get hostname
hostName = socket.gethostname()

# Assign envFDMEE based on FDMEE Server
try:
    envFDMEE = dictFDMEEServers[hostName]
except KeyError, err:
    errMsg = "FDMEE Server %s does not exist in the dictionary dictFDMEEServers" % hostName
    raise RuntimeError(errMsg)
# Debug
fdmAPI.logDebug("FDMEE Server: %s (%s)" % (hostName, envFDMEE))

# ****************************************************************
# Specific Code for Development
# ****************************************************************
if envFDMEE == "DEV":
    # Debug
    fdmAPI.logDebug("Code for %s" % envFDMEE)
    # your code here

# ****************************************************************
# Specific Code for Production
# ****************************************************************
if envFDMEE == "PROD":
    # Debug
    fdmAPI.logDebug("Code for %s" % envFDMEE)
    # your code here 

# ****************************************************************
# Specific Code for Environment XXX
# ****************************************************************
if envFDMEE == "XXX":
    # Debug
    fdmAPI.logDebug("Code for %s" % envFDMEE)
    # your code here  

# ****************************************************************
# Code for all environments
# ****************************************************************

# your code here

Code snippets for FDMEE can be downloaded from GitHub.

Code Snippet: logging fdmContext values

Object fdmContext is the dictionary FDMEE uses to store context information such as Location Name, Import Format Name, Process Id, etc. Most of the values stored in fdmContext are specific for the current process being executed. Others are global to all processes.

If you want to learn more about dictionaries in Python/Jython, please visit this tutorial.

Logging all fdmContext properties (loop sorted dictionary)

Not sure which values you can get from fdmContext in any event script?
Simply include this snippet in your event script and see the results in the FDMEE process log.

 Snippet:       Log content of fdmContext object
 Author:        Francisco Amores
 Date:          11/11/2016
 Notes:         This snippet can be pasted in any event script.
                Content of fdmContext object will be logged in the
                FDMEE process log (...\outbox\logs\)
 Instructions:  Set log level (global or application settings) to 5 
 Hints:         Use this snippet to see different context values
                at any workflow step.
 FDMEE Version: and later

# initialize
logLines = ""

# loop all fdmContext keys (sorted)
for key in sorted(fdmContext):
    # get key value
    value = fdmContext[key]
    # build log line (property name: property value)
    logLines += "%s: %s\n" % (key, value)

# write lines to log (Debug)
if logLines:
    fdmAPI.logDebug("Content of fdmContext: \n%s" % logLines)
else:
    fdmAPI.logWarn("Nothing to log from fdmContext")

Logging all fdmContext properties (list comprehension)

First, make it work. Then, make it Jythonic. Try to simplify your code and make it more readable.
We can also use list comprehension to build lists dynamically:

 Snippet:       Log content of fdmContext object
 Author:        Francisco Amores
 Date:          11/11/2016
 Notes:         This snippet can be pasted in any event script.
                Content of fdmContext object will be logged in the
                FDMEE process log (...\outbox\logs\)
                This script uses list comprehension builder
 Instructions:  Set log level (global or application settings) to 5 
 Hints:         Use this snippet to see different context values
                at any workflow step.
 FDMEE Version: and later

# initialize
logLines = ""

# List comprehension
# Define a list with pairs "Key: Value" from sorted fdmContext
pairs = ["%s: %s" % (key, fdmContext[key]) for key in sorted(fdmContext)]
# The list is then joined with the "\n" character (newline)
logLines = "\n".join(pairs)

# write lines to log (Debug)
if logLines:
    fdmAPI.logDebug("Content of fdmContext: \n%s" % logLines)
else:
    fdmAPI.logWarn("Nothing to log from fdmContext")

Code snippets for FDMEE can be downloaded from GitHub.

Thursday, October 27, 2016

Universal Data Adapter - Extracting YTD balances (Source Filters)

Hi folks,

Life keeps me busy and it's getting hard to find time to post, which I feel sorry about.

Some time ago, I delivered an ODTUG webinar about extending the source filter functionality for the Universal Data Adapter (UDA). As a "new" feature in FDMEE, the functionality is a bit limited compared with what you can do when you have a script to extract data from your source Table/View. For this reason, customization can be very helpful.

I wanted to have some more use cases before I shared with you a good one. As you can imagine, I finally got it :-)

The main goal of this post is to show how customizing source filters can help you in many different ways. Not sure about that? I hope you get a clear idea after reading it.

As this post is about UDA, I would recommend reading my previous posts in case you haven't:
Use Case
There are many ways of extracting data depending on very different requirements, but we can all agree that extracting Trial Balance Year-to-Date (YTD) balances is the most common one.
What if your reporting and analysis process needs more detail than TB figures provide? We probably need to jump into the Transactional data concept. I know, that means bigger volumes of data, but if you need it, you have to extract it, and it's our job to build a robust, maintainable, flexible, and well-performing solution for that.

I don't want to dive deeper into functional details so I will show you the high level technical solution:
As you can see above, we have to extract data from two different Oracle regular views:
  • View with GL data
  • View with SALES data
These views have been created on top of the IQMS ERP database. If you google it, you already know we are talking about an ERP solution for Manufacturing. Therefore, we are extracting a huge number of transactional records. Time to put FDMEE on the racing track!

UDA is the source adapter used to perform this data extract. It provides seamless integration between source database tables/views and FDMEE, and it's RDBMS independent! The IT team was impressed that the source IQMS database (Oracle 11g) could be integrated with the FDMEE database (MSSQL 2012) in such an easy way. And the good thing? UDA uses Oracle Data Integrator (ODI), which can execute filters on the source DB; that means taking advantage of the source DB engine and transferring a lower volume of records between the two systems.

We will be filtering data based on the Location selected by the user in the FDMEE POV (Column COMPANY). Besides, we want to filter data by transaction date (Column JOURNAL_DATE), and here is where a decision has to be made.

Extracting YTD transactions
I guess you are all familiar with the YTD concept. I will not differentiate between Balance Sheet, Income Statement, or Sales. I will keep it simple and show how the functional requirement "Extract YTD figures" turns into a technical requirement.

Technically, extracting YTD transaction figures means filtering data between a range of transaction dates:
How can we implement this filter using UDA? How can we map one target period with multiple source periods? Basically, in two ways (maybe more):
  • Source Period Mappings (Standard)
  • Source Filters (Needs some customization)
Which one is better? What do we mean by "better"? Which one would you choose? 

Source Period Mappings
Since ERP integrations and Source Adapters were introduced in the product, it makes sense that, in the same way we create Global/Application period mappings, we can create source period mappings.
Basically, we map each Point-of-View Period with the source period(s) we want to extract data from. We can even create multiple calendars for each source.

For the UDA, we have to specify GL Year, GL Period and GL Period Number.
How does FDMEE know which columns in our source View/Table are used to filter periods? Easy: when we set up the UDA, we tell FDMEE which columns are classified as Year, Period and Period Number. This is only needed if we are going to use the source period mapping table to filter data. Otherwise, you don't have to classify the columns and, of course, you don't have to define the period mappings.
Once this configuration is done, when we configure the Data Load Rules (DLRs), we can select whether we want to use the source period mappings to filter data (Period Mapping Type = Explicit) and which calendar, or whether we want to extract all data from the source (Period Mapping Type = None):
So now that we know the basics, how would we define the Source Period Mapping table to extract YTD balances from the Oracle Views?
We would need to define the mapping table as:
  • Source Period Mappings for POV Period Jan-2016
    • 01/01/2016, 01/02/2016,...,01/31/2016
  • Source Period Mappings for POV Period Feb-2016
    • 01/01/2016, 01/02/2016,...,01/31/2016
    • 02/01/2016, 02/02/2016,...,02/29/2016
  • ...
Let's be a bit more creative... Do we really need to define source period mappings at date level? Why don't we define them at month level?
For example, we could define a new column in the View called PERIOD_NAME. Then we can easily define the period mappings as:
  • Source Period Mappings for POV Period Jan-2016
    • Jan-2016 (value of PERIOD_NAME column)
  • Source Period Mappings for POV Period Feb-2016
    • Jan-2016 (value of PERIOD_NAME column)
    • Feb-2016 (value of PERIOD_NAME column)
  • ...
This now seems easier to maintain but, is maintaining the period mapping table our decision-making driver? In this case, I would say no. Why? Because we can easily create the entire mapping table in minutes by importing it with the Excel Interface.
Then, what is really important here? 
Let's see how FDMEE would import data if we use the source period mapping table (e.g. Dec-2016):
  1. Import data for Jan-2016
  2. Import data for Feb-2016
  3. Import data for Mar-2016
  4. ...
  5. Import data for Dec-2016
  6. Transform all data
  7. Validate all data
  8. Export all data
  9. Load all data
Each import step above executes a SQL query against the source View; that means extracting, filtering, and loading data into TDATASEG_T.
Could we simplify this to just one import step which extracts the YTD figures in one shot?
Obviously, the answer is YES :-)
How? Keep reading...

Source Filter for UDA
If we think about a SQL query to get filtered data from the source view, we would have something like this:
We can identify two columns which will be filtered: COMPANY and JOURNAL_DATE.
The first thing we did was to define an additional column in the View: FILTER_DATE.
Initially, the reason for creating this column was to capture some special cases where Journals were posted on Date A but had to be imported in Period B, which did not match the period when the Journal was posted. Then, we reused that column in order to define our filter:
Once the column was created, we could create the source filters for our interface.

Source filters are defined in the source adapter as Parameters:
Values for these parameters are set in the Data Load Rule page:
It's very important that, to use this approach, the option Period Mapping Type is set to None. This option basically means that we will extract all data from the source, but filtered by our source parameters.
In this implementation, end-users don't have access to the Data Load Rule page so they can't change the values. Obviously, the p_company value is static for each DLR, but what about p_date? Its value needs to be dynamic as it depends on the POV period the user selects in the Data Load Workbench.

How can we make this parameter value dynamic? Can FDMEE do that?
Let's see first how it works internally.
  1. The DLR execution is launched
  2. FDMEE inserts all parameter values (not only source adapter parameters but also target options, etc.) into table AIF_BAL_RULE_LOAD_PARAMS
  3. The DLR is processed and the values are taken from that table in the different actions
So basically, FDMEE stores all parameter values before data is imported for each process id (LOADID):
Before data is imported? Yes. And can we perform any actions before data is loaded? Yes. How? Where? Event Scripts! That's the answer. The BefImport event script is executed just before data is imported, so it looks like the perfect place to make our parameter value dynamic.
In case you are not familiar with Event Scripts, you can find them in the Script Editor (which is not the best place to code, BTW):
Updating the Parameter Value
In order to facilitate date filtering, we decided to store dates as strings with format YYYYMMDD.
Why YYYYMMDD? Because it is quite close to the format the PERIODKEY is stored in (YYYY-MM-DD).
Start Date: as we are extracting YTD, the start date of the range will always be YYYY0101 (1st of Jan), where YYYY is the year of the POV period being processed.
End Date: this will be our POV Period Key, which has been configured in the Global Period Mapping table as the last day of each month.

Therefore, our parameter value should look like: BETWEEN 'YYYY0101' AND 'YYYYMMDD'
It's clear then that the PERIODKEY will be our driver to get the two values we need. This value is stored in the context object fdmContext:
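As a minimal sketch, the two dates can be derived from the period key with simple string handling. The period key is hard-coded here for illustration; in an event script it would come from fdmContext["PERIODKEY"]:

```python
# POV period key as stored in Global Period Mappings (YYYY-MM-DD);
# hard-coded here for illustration (in FDMEE: fdmContext["PERIODKEY"])
periodKey = "2016-05-31"

# End date: the period key without dashes -> YYYYMMDD
endDate = periodKey.replace("-", "")
# Start date: 1st of Jan of the same year -> YYYY0101
startDate = endDate[0:4] + "0101"

# Parameter value for p_date
pDateValue = "BETWEEN '%s' AND '%s'" % (startDate, endDate)
```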
Updating the table with parameter values is not difficult, you just need some SQL notions:
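A hedged sketch of what that BefImport update could look like. The column names PARAMETER_NAME and PARAMETER_VALUE are assumptions based on the standard FDMEE schema, and fdmAPI.executeDML is only available inside an event script, so the call is shown as a comment:

```python
# Update the p_date parameter value for the current process (LOADID).
# Column names are assumptions based on the standard FDMEE schema.
sqlUpdate = """
    UPDATE AIF_BAL_RULE_LOAD_PARAMS
    SET PARAMETER_VALUE = ?
    WHERE LOADID = ?
    AND PARAMETER_NAME = 'p_date'
"""

# Inside the BefImport event script, values would come from the context:
#   params = [pDateValue, fdmContext["LOADID"]]  # pDateValue built from PERIODKEY
#   fdmAPI.executeDML(sqlUpdate, params, False)
```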
And the good thing? This is totally transparent for the end-user, as the parameter value is updated on the fly based on the period he/she selects in the Point-of-View bar.

Process Details showing "Importing Data for <NONE>"
If you execute a DLR with period mapping type option set to None, then you see something like this in the Process Details page (not the error! just the <NONE> in the process step):
I don't like this message as you don't know which period is being processed. How can we change that?
This information is stored in table AIF_PROCESS_DETAILS just after data is imported into FDMEE. Again, back to similar questions as before: How? and Where?
How? The process step message takes the value displayed from column ENTITY_NAME. We just need to update this value for the current process id. The import step can be identified by value PROCESS_BAL_IMP in column ENTITY_TYPE. The period description is stored in fdmContext["PERIODNAME"].
Where? As the line is inserted after data is imported, the AftImport event script seems to be the right place.
The following code can make that happen:
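A sketch of such an AftImport update (PROCESS_ID as the process id column is an assumption based on the standard FDMEE schema; again, the fdmAPI call only works inside the event script):

```python
# Replace <NONE> with the POV period name for the import step
# of the current process id. Column names are assumptions based
# on the standard FDMEE schema.
sqlUpdate = """
    UPDATE AIF_PROCESS_DETAILS
    SET ENTITY_NAME = ?
    WHERE PROCESS_ID = ?
    AND ENTITY_TYPE = 'PROCESS_BAL_IMP'
"""

# Inside the AftImport event script:
#   params = [fdmContext["PERIODNAME"], fdmContext["LOADID"]]
#   fdmAPI.executeDML(sqlUpdate, params, True)
```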
And process details will show now:
Much better now, isn't it?

We have shown two different approaches to extract data from multiple source periods (due to the YTD requirement in our case).

The approach of defining a source period mapping table has an impact on the number of data extracts executed by FDMEE, one for each source period. Besides, it requires defining the table with multiple mapping entries.
On the other hand, by using a source filter on the date column, FDMEE executes a single import step using a BETWEEN filter in the WHERE clause of the SQL query executed on the source. IMHO, a better solution. But again, it all depends on your requirements.

That's all for today. I hope you found this helpful. As always, I suggest you don't focus on this specific solution but try to take the main idea to your specific requirements.
Sometimes, with a bit of creativity and customization, we can get a nice, maintainable, and flexible solution that meets our requirements regardless of their complexity.


Monday, July 25, 2016

Using On-Prem Planning REST API to Import Metadata

I finally found some time to blog after KScope16 :-)

KScope is always an inspiring event for me. I actually enjoy attending sessions other than the FDMEE ones. And this is what I did!

It's been some time since I got curious about REST (this is what I wish I could do, but it's not possible with the little one running around all the time and trying to destroy whatever he finds...)
For that reason I tried to attend REST sessions at KScope, even in the DB track!

I have to highlight the one delivered by Jake Turrell: a very interesting session about using the REST API in Groovy to import metadata/data from external sources into Planning. It was a good session: clear, concise, and very well presented.

This post is not about introducing REST, which I think is very well covered in other blogs. This post just provides some ideas about how we can invoke REST API methods from FDMEE in order to communicate with other parties, either as a source or as a target.

However, I found this descriptive image answering someone asking about REST vs. SOAP. I could not skip it in today's post :-)
The Requirement
Going back to what gave me the idea for this post: it all started some time ago when I had a chat with GJG about a requirement where REST could be a good solution to build a custom process that allowed importing Planning metadata from FDMEE.

What was the requirement? We had a customer using FDM Classic to load new accounts and entities into Planning before data was loaded. That way, they didn't get errors when target members didn't exist in the application. In other words, no partial loads in Planning.

Let's have a look at old solution BTW:
As you can see above, FDM Classic was launching an ODI scenario which was actually performing some operations and finally loading the new metadata into Planning. We could have also used the OLU (Outline Load Utility), but there were many security restrictions and they liked ODI, as other processes were also in place (if you are wondering whether they had a full ODI license, yes they did).

So time for migration, time for FDMEE. Keep same solution?
  • High level? YES
  • Technical solution? NO
Why keep the high-level solution? Seamless metadata import, robust error capturing, transparent for the end-user, etc.
End-users are only interested in seeing the gold fishes along the workflow: my data is imported, my data is mapped, my data is valid, my data is in Planning. That's all.

Looking at the technical solution, do we really need ODI to load metadata from a flat file? Even though they had a full ODI license, we agreed to remove that piece of the cake. ODI was not being used as expected, so there was no reason to keep it.

Was OLU a good replacement? Maybe, maybe not. A couple of points that didn't make me happy at all:
  • The FDMEE and Planning servers were different, so Planning utilities had to be executed remotely or installed on the FDMEE server.
  • It is not a clean solution in terms of error capturing, so it was hard to integrate with FDMEE as I wished.
John showed in his blog how to use REST and how the REST API is also available for on-prem Planning. It was not clear at that moment whether that was supported, but we got confirmation from Oracle, so green light! Thanks John for laying the first stone.

Decision made, time to play!

- Francisco: Hello REST, my name is Francisco and I want to use you from FDMEE. 
- REST: Hello Francisco, we are going to have fun for the next couple of days. 

REST API for On-Prem Planning
I'm not going to reinvent the wheel, so visiting John's post about the Planning REST API is a must to understand this post.
Already read it? Then you are probably now familiar with the Planning REST API and how to use it. I was.

As always, I recommend starting from a high-level approach and diving into lower levels until you finally have the technical solution. And that is what I did, starting from the high-level solution design:
The diagram above does not show all the details and looks simple at this stage.
When you start developing the orange boxes, several questions come up:

How do I invoke a REST API method to execute a Planning job?
How do I check Job status?
How do I read response from REST API?
How do I parse the response to get job id, result, etc.?

If we focus on today's topic, we could shed some light by defining the process as:

1) Invoke REST API method to run a Planning job to import metadata
2) Read and parse JSON response to get job details
3) Invoke REST API method to check status of job 1) (until it is completed)
4) If job succeeded, invoke REST API method to run a Planning job to refresh the cubes
5) Read and parse JSON response to get job details
6) Invoke REST API method to check status of job 4) (until it is completed)

Basically a game of invoking & reading response. So let's play that game.

Planning Jobs in the SUI
New to the SUI (Simplified User Interface)? Have a look, this is how the cloud services interface looks. The good thing is that the SUI is also available on-prem (Planning so far). You can feel like you're in the Cloud :-)
It took me some time to get used to how things are done now but once you get it, it's easy to remember.

I just need to create one job for my testing: a job to import new entities into Planning from a CSV file. The idea is to have a metadata file in the Planning Inbox/Outbox folder so the job can import it.
The metadata file is simple:
Playing around with REST Web Client
Now that we have created our job, it's time to play with REST. This is essential to get familiar with it. I wouldn't start coding without understanding how it works.

For that purpose, I'm also using the RESTClient for Firefox, easy to use and very helpful.
Invoking REST API method to run a Job
The API methods for PBCS accept a JSON body with parameters, but on-prem does not so far. You can easily create the following request and see the error returned:
A workaround is to pass the body as part of the POST URL:
As we can see, the JSON response contains the job status and job id (132). This last value will be used to invoke the GET method that returns the job status:
The new base entity FDMEE_Entity_REST has been created in Planning: 

Using FDMEE as the REST Client
John showed how to invoke the REST API from PowerShell scripts, Jake did the same from Groovy, and now it's time for Jython.

I first need to highlight that we are restricted to Jython 2.5.1 in the current FDMEE release. Luckily, from Python 2.6 onward there were several enhancements and, as you can imagine, some of them were related to HTTP requests and JSON.

Unluckily, we have to stick to 2.5.1, so we can't use any of these enhancements. The sample code below uses the native urllib2 module and Jyson, a third-party JSON codec for Jython 2.5.1.

Steps to run a Planning Job
1) Import necessary libraries/modules
2) Build URL
3) Build Body (parameters as an encoded dictionary)
4) Add Basic Authentication Header
5) Invoke REST API Method (open URL)
6) Read and parse the JSON response
7) Invoke REST API method to check Job status
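The steps above can be sketched as follows. Server name, application, job type/name, and credentials are illustrative values (not from the actual implementation), and the request/response handling with urllib2 and the Jyson codec is shown as comments since it needs the FDMEE/Jython runtime:

```python
import base64

# Illustrative values (replace with your own)
server = "http://planningserver:8300"
application = "DEMOAPP"
user = "admin"
password = "password"

# 2) Build URL: job parameters are passed on the POST URL
#    (the on-prem workaround described above)
url = "%s/HyperionPlanning/rest/v3/applications/%s/jobs?jobType=%s&jobName=%s" \
      % (server, application, "IMPORT_METADATA", "Import_Entities")

# 4) Basic Authentication header: "user:password" encoded in Base64
credentials = base64.b64encode(("%s:%s" % (user, password)).encode("utf-8")).decode("utf-8")
headers = {"Authorization": "Basic %s" % credentials}

# 5) and 6) Invoke the method and parse the JSON response
#    (Jython 2.5.1 with urllib2 and the Jyson codec):
#      request = urllib2.Request(url, "", headers)
#      response = urllib2.urlopen(request)
#      jobDetails = json.loads(response.read())
```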
I have logged all output so you can see details like JSON Response.
If you want to see more details about job execution you can navigate to Import and Export Status in SUI:
Refreshing the cube follows a similar approach, so I'll let you play with that.

This is all for today. I hope you found this post useful and got the basics to build your own solution.
I must say that, with FDMEE now providing Hybrid integration, REST is a very good approach to make Cloud and On-prem interact, and it will help you meet most of your requirements.

You just need to be creative, REST will do the rest.

Take care!