Migrating an On-Premises Essbase Cube to the Cloud

With Oracle Analytics Cloud (OAC) having been released last Friday (luck of the Irish!), I made it a priority to examine how difficult (or easy) it is to migrate an on-premises (OP) Essbase cube to the cloud. Overall, the process is simple, but there are a few gotchas and caveats I found along the way.

And since I post my failures on my blog, you will see what I tried that failed so you won’t make the same mistakes I did!

Below is my OP Essbase cube. It is a fairly standard workforce planning/analysis cube that I built using the US Office of Personnel Management's (OPM) General Schedule structure (the pay and locality-pay matrix for the different jobs in the US government). This is a BSO cube.

Because I truly wanted to test the export/import process, I added Typed Measures to my OP cube to see if they would come over. Here, you can see the different localities in the GS schedule.

To show I don’t have just one or two rows of data, here are the cube statistics. There’s a chunk of data in there…comparable to some cubes I’ve built in the past.

The following screen shows the outline structure a bit. In each period block, the base level rolls up through Department to “GS Positions”. I wanted to highlight that the Typed Measures for Locality show at the base level (Detroit, in this example).

I’ve also added the option to enter and calculate pay and locality pay percentage increases for the different Budget versions.
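To give a sense of the logic, here's a minimal sketch of what that kind of calc could look like. The member names “Base Pay”, “Locality Pay”, and “Pay Increase %” are placeholders for illustration (Locality Percentage is a real member in my cube), and it assumes the entered increase is stored as a decimal:

```
/* Sketch only: most member names here are placeholders, not my real outline. */
/* Assumes the entered increase is a decimal (0.03 for a 3% raise).           */
SET UPDATECALC OFF;

FIX ("FY2018", "Budget", @LEVMBRS ("Department", 0))
    "Base Pay" = "Base Pay" * (1 + "Pay Increase %");
    "Locality Pay" = "Base Pay" * "Locality Percentage";
ENDFIX
```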

As you can see, it's not a wimpy cube to export, data included. Using “dbxtool”, the Essbase cube export tool I wrote about last Friday, I enter the details required to connect to Essbase and download the desired cube, USG_WF.Emp_Dets.

I get a visual cue from the tool that my cube is being exported (so much better than a simple “Success”, IMHO).

When I go to the export's folder, I see I have 2 items. I was expecting just one (the Excel file), so I want to investigate. It turns out the zip file contains my data because the data set is too large for Excel. I understand that reasoning.

When I open the Excel file, I see many of the cube details laid out for me in “Cube Designer” format (notice this is also the tab that's active when the file opens). I see my application name, database name, and dimensions listed.

On the Cube.Settings tab, I see the details about my cube. Everything looks correct to me.

The Cube.Generations tab is empty.

Next, each dimension is listed on its own tab. I see the dimension name and hierarchy under “Members”. In the Definitions section, I see that the file and rule names have been provided for me.

Since I expected the dbxtool to export the OP cube in a cloud-ready format, I decided to load it to OAC as is. I needed to set up my Essbase cloud connection, which I did by clicking “Connections”.

I entered the URL for Essbase.

Next, I clicked “Transform Data” to load the workbook.

You can have the system choose the cube name, or you can modify it yourself. I did not make any modifications and clicked “Run”.

The tool asks if I want to build the cube…yes, Run.

Once the job has been started, you can choose to go to Job Viewer to see the progress.

I see my cube load is running. I can click “Refresh” to see the updates.

Also, I can click “Details” to see details behind the job.

Still in progress… If there are any errors, they will spit out to a text file that you view in this screen.

So, my build errored out.

I get an error that Headcount didn’t load (or anything else, for that matter). That’s odd.

So, I try the same process, except this time choosing to make Modifications. …Except I don't really have any options to change.

The second tab… The Application Name and “USG_WF” aren't generations in the cube… Hmmm. Well, I have problems.

Next, I try to import the file online in Essbase.

I see my cube name and dimensions are listed…great. I choose not to Load Data at this point. …There wasn't data on my data tab anyway. More on that later…

I see I’m erroring out again. Argh.

When I clicked “Export to Excel”, I really hoped to get more details about the errors, but I just got a regurgitation of what was on the screen.

However, I see my cube has been created. Wait, what?

When I click on the database and choose “Outline”…

I am brought to this screen. Notice the “Statistics” column shows no children (C), descendants (D), generations (G), or levels (L). No good.

Okay, 3rd attempt at loading the metadata. In Cube Designer, I choose “Build Cube”.

I get a warning this time.

Okay, so it was created. That's a step forward.

However, when I look at some of the error files, I see the same error as before, and I'm back to square one.

Out of frustration, I go to the “Templates” section in Essbase to download the workbook for Sample.Basic to see what I have wrong in my workbook.

I decided that I'm going to copy my dbxtool details into the Sample.Basic workbook.

Here are my details added to the Essbase.Cube tab.

Following Sample.Basic’s format, I added generation names for Department and Periods.

AHHHHHH. I see my issue. Using the dbxtool, I do NOT get the column names needed for my metadata. See below in light blue where it says “PARENT”, “CHILD”, etc. Yep, that’s it. And I knew it. So I now go through the process of adding the column names to my extract.
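If you hit the same problem, the fix is simply a header row above each dimension sheet's member columns, matching the Sample.Basic template. Something like this (the member rows below are made up for illustration, and your sheet may carry extra property columns from the extract):

```
PARENT          CHILD
GS Positions    Department A
Department A    Position 0001
```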

YES. FINALLY. SUCCESS. As you can see, it took me an hour’s worth of making mistakes to get it correct. But, I found my error.

The details behind the “Success” message.

I now have my new cube! …And I deleted the old cube that had no children.

Now when I go to the “Outline” tab, I have “Statistics”!

Just like in OP Essbase, you must lock the outline to edit it. To verify my formulas came over, I want to test one out.

Clicking on the member formula shows it correctly. Perfect.

It even validates. Sweet.
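As a flavor of the kind of thing I'm verifying, a pay-type member formula might look like the stand-in below (“Total Pay” and “Base Pay” are illustrative names; Locality Percentage is a real member in my cube):

```
/* Hypothetical member formula on a "Total Pay" member */
"Base Pay" + ("Base Pay" * "Locality Percentage");
```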

Now I want to load my calc scripts. I only have 3 main ones, so it's as simple as copying and pasting each script from OP to Cloud.

Under the “Scripts” tab, I can create a new script.

When I click “Validate”, I get confirmation the script is valid.

Saved off and ready for use.

If I need to add it later, I can do so on this same screen.

I have added my 2 other scripts to the repository.

To run the scripts manually, I click on “Jobs”. From here I click “New Job”.

I choose “Calc Execution” and the script I want to run. Then “Execute”.
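(Side note: if you'd rather script this than click through Jobs, the same run works from a MaxL session, assuming you have a MaxL client connected to your instance. “CalcPay” here is a hypothetical script name:)

```
/* Run a calc script on the migrated cube; the script name is hypothetical */
execute calculation 'USG_WF'.'Emp_Dets'.'CalcPay';
```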

The calcs haven’t really done anything yet because there is no data in the cube.

Okay. Now we need to talk about the data. This part really frustrated me. I couldn't figure out how to load my data into the cube, since it was not exported to the Excel workbook. I tried copying and pasting the data from the zip file, but it didn't line up with the format needed.

Here is the zip file. Notice how the columns are the Periods dimension members.

In the dbxtool workbook's Data tab, though, the columns are the Measures dimension, and Periods go in a dimension column.
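To make the mismatch concrete, the two shapes look roughly like this (Headcount, FY2018, and Budget are real members from my cube; everything else is a placeholder):

```
Zip export (Periods run across the columns):
  Headcount, Department A, FY2018, Budget,   Per1, Per2, Per3, ...

Data tab (Measures run across the columns; Periods drop into a row column):
  Department A, FY2018, Budget, Per1,   Headcount, Measure2, ...
```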

I tried to use Excel to rearrange the data, but the data size is just too large. I thought about loading it to my Oracle database, but that was going to take a while too.

There was really one tool for this task that stood above all others…and I shudder to say it…the Essbase Report Script. So, that's what I did.
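For the curious, the shape of that report script is roughly the sketch below. The member selections are stand-ins (only Measures, Department, and Periods are real names from my cube), and a real script would also need the remaining dimensions in the row set:

```
// Sketch only: the selections below are illustrative stand-ins.
// Goal: Measures across the columns, everything else down the rows,
// in a layout the workbook's Data tab will accept.
{TABDELIMIT}
{ROWREPEAT}
{SUPMISSINGROWS}
{SUPBRACKETS}
<COLUMN ("Measures")
<CHILDREN "Measures"
<ROW ("Department", "Periods")
<DIMBOTTOM "Department"
<DIMBOTTOM "Periods"
!
```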

And it did its job perfectly.

I decided to test something out at this point… Since I knew my dimension build tabs were good in my workbook, I deleted the application from the Cloud. I wanted to try and rebuild it with the data in the data tab.

When I went through the process to load the cube using the “Import” option online, I chose to load the data as well. (Pay no attention to the warning given on the screen. I forgot to take screenshots the first time around…) Since I deleted my previous cube and calc scripts, I re-added my calcs after the cube was recreated.

And it worked!

To verify the data loaded completely, I want to run my 3 calc scripts to calc the data. In the jobs tab, I click “New Job”.

It will ask me which script…

And I can see it ran successfully.

This is what a successful script run looks like from the Details:

To test the calcs, I decided to compare numbers at the top for FY2018->Budget->Final.

Here are the Cloud numbers:

Here are the OP numbers:

Perfect! I just need to reset the consolidation settings on a few members so that they don't roll up where it doesn't make sense (like Locality Percentage).

Some takeaways:

· The dbxtool does not export column headings that are necessary for importing a cube into the cloud. You will need to add these.

· If you have a lot of data, you may need to load data outside of the data tab in the workbook. Using BI Data Sync is a great option, especially if loading XLSX, CSV, or database data.

· Double check the work of the dbxtool! Just for sanity’s sake.

· There is more than one way to load a new cube or update an existing one. Pick whichever you like best!

· You can now use the cube like you would any OP cube with Smart View, BICS (OAC), and Data Visualization. I wrote a post last week on how to connect DV to Essbase data sources. The link is here.

· The Typed Measures (Locality) did not come over in the migration. They were never exported with the dbxtool, so I did lose that in the process.

If I had the workbook set up correctly for the dimensions and data, I probably could have done this whole process in about 30-45 minutes. Pretty quick, IMHO! Seeing the numbers tie out on the first try has given me confidence in the tool.

Happy cube migrating!
