Integration Admin Options

Overview

This document describes how Dynamic Planning integrates with Planful Structured Planning, Consolidation, and Reporting (PCR) applications. This is the traditional method of integrating with PCR.

  • Dynamic Planning pulls data from the Structured Planning, Consolidation, and Reporting applications.

  • Users can analyze and report on the data.

  • Users can drill through rollup data in Dynamic Planning to see the original data aggregated from Structured Planning, Consolidation, and Reporting.

  • Users can modify the data within Dynamic Planning and then write the data back to Structured Planning, Consolidation, and Reporting applications.

Dynamic Planning also offers the ability to connect directly to the Financial Reporting Area of PCR. This is a simple way to view PCR data in models on a read-only basis. For details on how to use this feature, see Using Direct Access to PCR Financial Reporting.

Definitions

  • Dynamic Planning: the Dynamic Planning product. At this time, Dynamic Planning connects only to the Financial Reporting cube in Structured Planning, Consolidation, and Reporting; it does not connect to the Sales cubes.

  • Horizon: an internal name for the pathway used to access Structured Planning, Consolidation, and Reporting applications for Metadata download.

  • Soap: an internal name for the pathway used to access Structured Planning, Consolidation, and Reporting applications for Data download.

  • Tenant Code: the code for a Structured Planning, Consolidation, and Reporting application; it identifies which application you want to download from. The Tenant Code is not necessarily the same as the Tenant Name for Structured Planning, Consolidation, and Reporting applications.

  • Source model: a model called HACPM_Financial. It connects to the specified Structured Planning, Consolidation, or Reporting application and stores the entire set of metadata that exists in that application. It is the conduit for getting data into the Master/Target model.

  • Master model: a model with a name of your choice. It is also called the Target model because it is the target of the metadata and data copied from the Source model.

Model Design > Connecting to the Structured Planning, Consolidation, and Reporting Applications

Power user access is required for these steps.

Manage > Application Administration > Application Settings

Fill in the Username and Password under Horizon Uri: this is a login to the exposed Structured Planning, Consolidation, or Reporting application service that allows Metadata download.

Fill in the Username and Password under Soap Uri: this is a login to the exposed Structured Planning, Consolidation, or Reporting application service that allows Data download.

Horizon Uri, Soap Uri, and Tenant Code are protected fields. You must call Support in order to change these fields.

If performing only a Metadata download, you must specify Horizon Uri, Username, Password, and Tenant Code. If performing only a Data download, you must specify Soap Uri, Username, Password, and Tenant Code. Note that the Tenant Code is not necessarily the same as the Tenant Name for the Structured Planning, Consolidation, and Reporting applications.

Specify the following items if your model will exceed the defaults:

  • Master model keys: default 1,000,000. Maximum: 1 trillion. Best practice: 100 million. This is the number of Dimension intersections tagged as Key in Model > Setup for a Master model. To calculate it, multiply together the number of members in each Key dimension. Note that if a dimension will be formed only with one or more Fixed Member mappings, assume that one parent member is added to the top of that dimension.

  • Master model value block size: default 20,000. Maximum: 100,000. Best practice: 100,000 for a large model. This is the number of Dimension intersections tagged as Value in Model > Setup for a Master model. To calculate it, multiply together the number of members in each Value dimension. The same note applies: if a dimension will be formed only with one or more Fixed Member mappings, assume that one parent member is added to the top of that dimension. A small model has a block size of 20,000 or less, a medium model 50,000-60,000, and a large model 90,000-100,000.

  • Maximum models: default 20. Minimum: 1. Maximum: 20. The maximum number of Analytic, Master, and Dynamic models that you want to allow within an application.


Note:
The other options available on the Application Settings page are irrelevant to data integration with the Structured Planning, Consolidation, and Reporting applications.

Click Save. No name is specified; you are saving generic settings for connecting to a Structured Planning, Consolidation, and Reporting application. If you later need to connect to a different application (Structured Planning, Consolidation, or Reporting), you must change the settings here.

When you click Save, the passwords disappear but are still stored internally.

At this point, no models have been created yet in Dynamic Planning.

Model Design > Setting up the Source Model in Dynamic Planning

Power user access is required for these steps.

Manage > Application Administration > Metadata Download

This option has no screen associated with it. Whatever you were viewing before you selected Metadata Download stays on the screen. Only the menu bar changes to show that you selected Metadata Download and to give you one available command, Run.

Click Run. This downloads all dimensions and members from the Structured Planning, Consolidation, and Reporting application selected on the Application Settings page under Tenant Code and stores them in a new model called HACPM_Financial. This is a special model that exists only to serve as a Source model for a Target/Master model, and to allow write-back to the Structured Planning, Consolidation, and Reporting applications or an external data source. When HACPM_Financial is created via the Metadata download, it is generated automatically by the system; there is no need to click Generate Model.

At this point, a model of type Source has been created and generated, and it is called HACPM_Financial. It is populated with dimensions and members that it obtained from Structured Planning, Consolidation, and Reporting applications, but contains no data. Despite the fact that HACPM_Financial has been generated, it is not visible from the Analyze or Report tabs. There are no formulas, calculations, or scopes created with it.

Note:
If the Metadata Download fails, any metadata that previously existed in the HACPM_Financial model is removed.

Power or Contributor user access is required for the remaining steps.

Model > Setup

Select HACPM_Financial.

The type is Source.

The creation and modify dates are shown along with the status that it has been generated.

All of the Dimensions that exist in Structured Planning, Consolidation, and Reporting applications are listed. By default, all dimensions except for Measures are type Key. Measures is Value. These types are assigned because Dynamic Planning needs to have a type for each dimension in HACPM_Financial. They have no meaning on this page. You will set meaningful types when you define the dimensions in the Target/Master model.


Model > Dimension

The first dimension shown is Company (the first dimension listed on the Model > Setup page). You can scroll down and see all the members in the Company dimension, their parents, and rollup operators.


You can select a different dimension from the Dimension drop-down to view its members.

Model Design > Validation Report for Duplicate Members

Once you have downloaded Metadata from the Structured Planning, Consolidation, and Reporting applications, you should check how Dynamic Planning will handle duplicate member names. The Structured Planning, Consolidation, and Reporting applications allow the use of a Member Code within one hierarchy and the same Member Code within another hierarchy. For example, Member Code “Project Manager” could be used within a Company hierarchy of “IT” and also under a Company hierarchy of “R&D.”

Dynamic Planning requires that all the member codes be unique.

When it encounters duplicate member codes, it renames one by concatenating Member Code and Member ID with an underscore between them. See New Member Code below.
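As a sketch of this renaming rule, the logic might look like the following (the function, data, and Member IDs here are illustrative, not Planful's actual implementation):

```python
# Hypothetical sketch: when a Member Code repeats across hierarchies,
# later occurrences are renamed to "<MemberCode>_<MemberID>".

def dedupe_member_codes(members):
    """members: list of (member_code, member_id) tuples, in download order.
    Returns the list with duplicate codes given a New Member Code."""
    seen = set()
    result = []
    for code, member_id in members:
        if code in seen:
            code = f"{code}_{member_id}"  # New Member Code
        seen.add(code)
        result.append((code, member_id))
    return result

members = [
    ("Project Manager", "1001"),  # under Company hierarchy "IT"
    ("Project Manager", "2002"),  # under Company hierarchy "R&D" -> renamed
]
print(dedupe_member_codes(members))
# [('Project Manager', '1001'), ('Project Manager_2002', '2002')]
```

The first occurrence keeps its original code; only later duplicates are renamed.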


Power or Contributor user access is required for these steps.

Model > Model Administration > Validation

Navigate to Model > Model Administration > Validation and select HACPM_Financial from the Model list box. The report runs automatically. The status shows Failed if Dynamic Planning had to modify any member names due to duplication.

If you do not like the New Member Code, there is no way to change it in Dynamic Planning. Instead, change the segments in the Structured Planning, Consolidation, and Reporting applications to avoid the duplication.


Model Design > Setting up the Target/Master Model using a Map

You need to be familiar with the high-level outline of the Source model or Structured Planning, Consolidation, or Reporting application and know what you want to use in your Target model before continuing this process. It will help to look at information about the metadata that was downloaded from Structured Planning, Consolidation, and Reporting applications.

Power user access is required for these steps.

Manage > Application Administration > Application Statistics

Navigate to Manage > Application Administration > Application Statistics. The Application Statistics page shows the number of models in your application, your software license user restrictions, user types, disk space consumption, and numbers of dimensions and members in each model.

Under Source Models, you will see HACPM_Financial, its disk usage and number of records.

Look further down the page to see more details about HACPM_Financial:

  • Dimensions: shows the categories of information that exist in Structured Planning, Consolidation, and Reporting applications and how they have been mapped to dimensions in Dynamic Planning.

  • Type: shows the dimension types in HACPM_Financial. Remember, they have no meaning here, so just ignore them.

  • Member Count: shows the maximum number of members that you have access to when defining your Target model.

  • Hierarchy Levels: shows the number of levels of parent/child relationships in the dimension.


Power or Contributor user access is required for the remaining steps.

Model Design > Target Model Design

Decide which dimensions from HACPM_Financial you want to use in your Target model, and then decide what type of dimensions they will be. Here are the two types:

  • Key: dimensions and members will change; generally more dimension members

  • Value: dimensions and members that don’t change much and that are always populated with data; generally fewer dimension members.

Tip!
Copy the list of dimensions, member count, and hierarchy levels to a separate document and mark which dimensions will be Key and which will be Value.

Ensure that your application has enough capacity to store the Target model you intend to define.

  • Multiply the Member Count for all Key dimensions together. This number is your minimum Master Model Keys value. For example, if you are expecting your Key dimensions to have membership of Account=80, Company=10, Department=60, Product=30, Project=40, then your Master Model Keys value must be equal to or greater than 57,600,000.

  • Multiply the Member Count for all Value dimensions together. This number is your minimum Master Model Value Block Size value. For example, if you are expecting your Value dimensions to have membership of Scenario=3, Time=15, Measures=2, then your Block Size must be equal to or greater than 90.

  • Go back to Manage > Application Administration > Application Settings and increase the values of Master Model Keys and Master Model Value Block Size, if necessary, to accommodate the capacity that you expect in your Target model. Save your changes.
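The capacity arithmetic in the steps above can be sketched in a few lines (the helper names are hypothetical; the member counts are the examples from the text):

```python
# Hypothetical helpers for sizing a Master model before generating it.

def required_keys(key_member_counts):
    """Minimum Master Model Keys: the product of the member counts
    of all Key dimensions."""
    total = 1
    for count in key_member_counts:
        total *= count
    return total

def required_block_size(value_member_counts):
    """Minimum Master Model Value Block Size: the product of the member
    counts of all Value dimensions."""
    total = 1
    for count in value_member_counts:
        total *= count
    return total

# Key dimensions: Account=80, Company=10, Department=60, Product=30, Project=40
keys_needed = required_keys([80, 10, 60, 30, 40])
# Value dimensions: Scenario=3, Time=15, Measures=2
block_needed = required_block_size([3, 15, 2])

print(keys_needed, block_needed)  # 57600000 90
```

If either result exceeds the current Application Settings value, raise that setting before generating the model.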

Model > Setup

In Practice

  1. Under Model, select Setup. This clears the screen of information that you may have been viewing before.

  2. Now delete “New Model” and type in the name for the Target model.

  3. Set the Type to Master.

  4. Change Data Tracking is an optional feature that marks leaf-level data as dirty whenever it is changed, such as through user input, calculations, or data load. When the next full aggregation (Aggregation, None) is performed, only the rollups with dirty leaf-level data are aggregated. This makes full aggregations faster.

    Enabling Change Data Tracking is helpful in cases where users are updating one or two department’s data and need to run a full aggregation to see the data rolled up against all departments. Once Change Data Tracking is enabled, the application will track which data blocks are changing. When a full aggregation is run, instead of running the aggregation for the entire model, the system will run aggregation only for the blocks that changed (in this case, the departments that changed and the dependent rollup members), so the total aggregation process will take less time.

    There are additional steps to complete if you want to enable Direct Access to PCR. See Using Direct Access (aka Direct Connect) to PCR Financial Reporting for more information.

  5. Next you define the dimensions that you want in your Target model in the Dimension and Type list.

    • List the names of the dimensions from the Source that you want to use. You can use the same names as Structured Planning, Consolidation, and Reporting applications or you can change the names. You will map one name to the other when you do the Mapping.

    • List the names of any new dimensions you want to use in your Target model.

    • Specify the type for each of the dimensions: Key or Value.

  6. Save. The Master model is then created, and the Created On and Created By fields are populated.
    At this point, you have a Source model that has been generated. And you have a Master/Target model that has been created at the highest level (dimensions only), but not generated. You cannot generate the Target model until members are defined.
Note:
If you are creating any new dimensions that are not copied from the Source, you now need to go to Map > Dimensions, specify all the members of those dimensions, and Save. (In a future release, the members you define will be saved after you generate the model. Previously only the DimensionFilter member was retained.)

Model > Map

Here you specify which members of the Source dimensions will be copied to the Target/Master. You need to be familiar with the detailed outline of the Source and know what you want to achieve in your Target outline before continuing this process. You do not have to enter each member name because you can use filters to designate which members to copy.

In Practice

  1. The Target Model is the name of the Master model you created in the previous step.

  2. The Name lets you give this mapping a name so you can re-use it.

  3. The Type indicates what should be copied from the source: the metadata, the data, or both. For first-time mapping, select Metadata or Both. The Data type is used for Write-back.

    Hints :

    • If you want to test your Target outline for effectiveness first, select Metadata so you can experiment with the resulting outline. Note that selecting Metadata/Leaf or Metadata/All does not require the Data Download step described below; you only need to Generate the Model to move the metadata into the Target.

    • If you expect the data that will be copied to take a long time, you may want to copy just Metadata first, and then copy Both at a different time, such as overnight.

    • Eventually you will likely have more than one map defined for the model. Note, however, that only one of those maps can include metadata as part of its definition (i.e., type Metadata or Both).

  4. The Source Model is HACPM_Financial.

  5. Transfer = Leaf or All. Select Leaf. The All option is not applicable to Data Integration with Structured Planning, Consolidation, and Reporting applications. The Structured Planning, Consolidation, and Reporting applications store only leaf-level data, so there is no way to transfer parent data from Structured Planning, Consolidation, and Reporting applications. Parent data is created only within Dynamic Planning using Calculations.


Source to Target Map Table

Here you must list all the dimensions from the Source (HACPM_Financial) under Source Dimension, whether or not you intend to use them, with exceptions noted below.

Note:
The exceptions are BudgetEntity, AttributeHierarchy, and Attribute, which are dimensions in the Source but are not mandatory to be defined or used in the Target. If you choose to use BudgetEntity, simply include it in the map table as you would any other dimension but avoid using AllMembers in the Map table. AttributeHierarchy and Attribute are dimensions whose purpose and implementation are being changed in an upcoming release so that they are not separated out as dimensions. Omit AttributeHierarchy and Attribute from the Map table.

Hint : To get a list of all the Source dimensions, save your Map before you have specified any source dimensions, then go to Model > Setup, select HACPM_Financial, and then copy the dimension names to the Clipboard. Then open a new temporary workbook and copy the dimension names there as a holding place (remember to omit AttributeHierarchy and Attribute, and optionally omit BudgetEntity). Return to the original workbook, return to Model > Map, specify the Target Model and Map Name, then paste the dimensions from the temporary workbook.

Note:
Moving between Model > Setup and Model > Map erases whatever is on the Clipboard, hence the need for the temporary workbook.


Optional: Next, list all the dimensions that are new in the Target model (i.e., not copied from the Source) under Target Dimension. Set Source Dimension, Source Filter, and Source Value to None. Target Filter must be set to DimensionFilter, and Target Value should be one of the members of the dimension.


For Source Dimensions that you do not plan to use in your Target model, set Source Filter to DimensionFilter (to filter the dimension out) and specify the one member from the source dimension where the data will come from. Then specify None for the Target Dimension, Target Filter, and Target Value. If you plan to read and write data back and forth between the Structured Planning, Consolidation, and Reporting applications and Dynamic Planning, the member you choose from the source dimension must be a leaf member. If you select a parent member, you will not be able to write data back to that member. For example, if IC Segment Main is a parent member, you cannot write data back to it later.


In the first row of the table, there are drop-downs to assist you with selecting the best option for each mapping. In the second and following rows, there are no drop-downs but you can copy/paste the first row to the remaining rows.

Source Filter options

The Source Filter column defines both what metadata and what data are copied into the Target.

  • AllMembers
    Metadata copied: all members of the dimension.
    Data copied: leaf-level data of the dimension.

  • LevelAndAbove
    Metadata copied: all the members in the specified member level and their parent levels.
    Data copied: if the members in the specified level are leaf members, their data is copied. If the members in the specified level are parent levels, Dynamic Planning aggregates the leaf-level data from the Source in order to populate the specified level.

  • MemberAndBelow
    Metadata copied: the specified member and its children. Any parent levels above the member in the dimension are not copied into the Target. Note that if you previously copied AllMembers for this dimension and generated the model, all other members still exist in the dimension; they are not removed. Rather, the specified member is separated out into its own hierarchy, but the other parts of the original hierarchy remain. To remove the excess hierarchy, go to Model > Setup, click Clear Model, then go to Model > Dimension, delete all members except the root dimension member, and Save. The next time you use Generate Model, only the specified member and below are copied.
    Data copied: if the specified member is a leaf member, its data is copied; since it has no children, no other data is copied. If the specified member is a parent, grandparent, or more distant ancestor, only the data for the lowest leaf levels is copied.

  • FixedMember
    Metadata copied: the specified member. It will be a child of the dimension member.
    Data copied: if the specified member is a leaf member, its data is copied. If the specified member is a parent, Dynamic Planning aggregates the leaf-level data from the Source in order to populate the specified member.

  • DimensionFilter
    Metadata copied: none; this dimension is not copied.
    Data copied: none, but you still need to specify which one member affiliation should be used when copying. Since the source is a multidimensional database, all of its data items are affiliated with at least one member of each dimension.

  • LeafMembers
    Metadata copied: all leaf-level members of the specified dimension.
    Data copied: data for all leaf members.

  • None
    In this case, the Target Filter type must be DimensionFilter. There is no dimension in the Source associated with the Target dimension being defined.
    Data copied: none, because the data originates in the Target, not the Source.

Aggregation Operators

The following aggregation operators in Structured Planning, Consolidation, and Reporting applications are recognized and used:

  • + The value of the member is added to the aggregate value of the preceding sibling member(s).

  • - The value of the member is subtracted from the value of the preceding sibling member(s).

  • ~ The value of the member is ignored.

The following aggregation operators in Structured Planning, Consolidation, and Reporting applications are not recognized and are not used. Dynamic Planning assigns them the operator.

  • *

  • /

The following operator is available to you in the Dynamic Planning app. It does not come from Structured Planning, Consolidation, and Reporting applications:

  • ! The value does not include the child member value in the consolidation to its own parent and it is not included in the rollup to any other dimensions either.
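A minimal sketch of how these rollup operators combine child values into a parent follows (the member values and ordering are invented for illustration; this is not Planful's actual aggregation engine):

```python
# Hypothetical sketch: consolidate children into a parent using the
# rollup operators described above. "~" and "!" children are simply
# excluded from this parent's total.

def rollup(children):
    """children: list of (operator, value) pairs; returns the parent value."""
    total = 0.0
    for op, value in children:
        if op == "+":
            total += value
        elif op == "-":
            total -= value
        elif op in ("~", "!"):
            pass  # value is ignored in the consolidation
        else:
            raise ValueError(f"Unsupported operator: {op}")
    return total

# Net = Revenue - Expenses, with a statistical member ignored
print(rollup([("+", 500.0), ("-", 320.0), ("~", 42.0)]))  # 180.0
```

Note that "!" additionally excludes the member from rollups to other dimensions; this one-dimension sketch cannot show that distinction.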

Source Value options

The Source Value column defines the starting point for the Source Filter choice. If Source Filter is one of the following, Source Value should be:

  • AllMembers: None

  • LevelAndAbove: a number indicating the level in the hierarchy. The top (dimension) level is level 1.

  • MemberAndBelow: a member name

  • FixedMember: a member name

  • DimensionFilter: a member name

  • LeafMembers: a parent member name

  • None: None

Target Filter options

The Target Filter is usually the same as the Source Filter. If Source Filter is one of the following, Target Filter should be:

  • AllMembers: AllMembers

  • LevelAndAbove: LevelAndAbove

  • MemberAndBelow: MemberAndBelow

  • FixedMember: FixedMember

  • DimensionFilter: None

  • LeafMembers: LeafMembers

  • None: DimensionFilter

Target Value options

The Target Value column defines the starting point for the Target Filter choice. If Target Filter is one of the following, Target Value should be:

  • AllMembers: None

  • LevelAndAbove: a number indicating the level in the hierarchy. The top (dimension) level is level 1.

  • MemberAndBelow: a member name

  • FixedMember: a member name

  • DimensionFilter: a member name

  • LeafMembers: a parent member name

  • None: None

When you save, the system checks the mapping for validity.
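The pairing rules from the tables above can be sketched as a simple validator. This is assumed logic for illustration only; `validate_row` and its rule tables are hypothetical, not the actual Planful check:

```python
# Hypothetical validator for one Source-to-Target map row, encoding the
# Source Value and Target Filter pairings documented above.

SOURCE_VALUE_RULE = {
    "AllMembers": "none",
    "LevelAndAbove": "level_number",
    "MemberAndBelow": "member_name",
    "FixedMember": "member_name",
    "DimensionFilter": "member_name",
    "LeafMembers": "member_name",  # specifically a parent member name
    "None": "none",
}

EXPECTED_TARGET_FILTER = {
    "AllMembers": "AllMembers",
    "LevelAndAbove": "LevelAndAbove",
    "MemberAndBelow": "MemberAndBelow",
    "FixedMember": "FixedMember",
    "DimensionFilter": "None",
    "LeafMembers": "LeafMembers",
    "None": "DimensionFilter",
}

def validate_row(source_filter, source_value, target_filter):
    """Return a list of validation errors for one map row (empty if valid)."""
    errors = []
    rule = SOURCE_VALUE_RULE.get(source_filter)
    if rule is None:
        errors.append(f"Unknown Source Filter: {source_filter}")
    elif rule == "none" and source_value not in (None, "None"):
        errors.append("Source Value must be None for this filter")
    elif rule == "level_number" and not str(source_value).isdigit():
        errors.append("Source Value must be a level number")
    elif rule == "member_name" and not source_value:
        errors.append("Source Value must be a member name")
    expected = EXPECTED_TARGET_FILTER.get(source_filter)
    if expected is not None and target_filter != expected:
        errors.append(f"Target Filter should be {expected}")
    return errors

print(validate_row("DimensionFilter", "IC Segment Main", "None"))  # []
print(validate_row("AllMembers", "Company", "AllMembers"))
```

The second call reports an error because AllMembers takes no Source Value.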

At this point, you still have a Source model that has been generated, you have a Target/Master model that has been created at the highest level (dimensions only), and you have defined the way that Dynamic Planning should populate the Target/Master with members, data, or both. The Target/Master model is not yet generated, does not contain members, and does not contain data.

Populating the Source Model > Data Download

Power user access is required for these steps.

Data Download is a Type option on the Calculation page (accessed via the Model task and Calculation subtask).

Select the Data Download type (shown below), which is executed when the Calculation is run. Dynamic Planning connects to the Structured Planning, Consolidation, and Reporting applications, downloads the data, and fills it into the Source model (HACPM_Financial).

Note:
Before data is downloaded from PCR, the system checks to see when the partitions in PCR were last processed. If there was any change in the PCR partitions since the last data download, the system downloads only the changes. This optimization speeds up the data download process.


Optimization

Downloading data from the Structured Planning, Consolidation, and Reporting application is time-consuming. We have optimized this process to keep track of the last date that data was downloaded, and to download only what has changed since then. This optimization applies specifically to those using Scenario partitions in the Structured Planning, Consolidation, and Reporting application. It is useful in cases where some data changes frequently (such as Budgeting data), while other data changes rarely (such as Actuals data).

The Scenario partitions in Structured Planning, Consolidation, and Reporting have a "Last Process Date." Dynamic Planning uses this date to determine which data needs to be downloaded.

This optimization is handled internally by Dynamic Planning and no configuration settings are required from the Power user.

Note:
If, for any reason, the data is not being downloaded, process the Scenario partition in the Structured Planning, Consolidation, and Reporting application, and then try downloading the data in Dynamic Planning.
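The last-process-date check can be illustrated with a short sketch (assumed logic; in practice this is handled internally by Dynamic Planning and requires no configuration):

```python
# Hypothetical sketch of the incremental-download decision: only Scenario
# partitions processed in PCR since the last download need new data.
from datetime import datetime

def partitions_to_download(partitions, last_download):
    """partitions: {scenario_name: last_process_date}.
    Returns the scenarios whose partition changed since last_download."""
    return [name for name, processed in partitions.items()
            if processed > last_download]

partitions = {
    "Budget 2020": datetime(2020, 3, 5),   # reprocessed recently
    "Actuals":     datetime(2020, 1, 10),  # unchanged since last download
}
last_download = datetime(2020, 2, 1)
print(partitions_to_download(partitions, last_download))  # ['Budget 2020']
```

This also suggests why an apparently "stuck" download can be fixed by reprocessing the Scenario partition in PCR: reprocessing advances the Last Process Date past the last download.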

Manage > Application Administration > Request Status

Selecting this option opens the Request Status page, which shows the results of processes. After running Data Download, a new entry will appear at the bottom of the table. The type of operation is shown along with the start date and time, followed by its status. If the status shown is a partial completion, such as a percentage complete, click Refresh after a few seconds to track when the operation is complete.


If the status shown is a Failure, the error message is listed.

Note:
“Maps for Model: HACPM_Financial not found” means that your map is defined as a Metadata download only. You can skip the Data Download step.

You can also use the Audit Log under the Manage, Application Administration menu.

At this point, you now have a Source model that has been generated and contains data, you have a Target/Master model that has been created at the highest level (dimensions only), and you have defined the way that Dynamic Planning should populate the Target/Master with members, data, or both. The Target/Master model is not yet generated, does not contain members, and does not contain data.

Clearing the Source Model

You use the HACPM_Financial Source model as an intermediary step to store financial data. You download data into HACPM_Financial and then use a map to copy the data into a Master model. After you have copied data into the Master model, you may not need it anymore. You can use the Clear Model menu command to clear data from HACPM_Financial.

For example, when you first set up your Master model, you may choose to download the last 5 years of Actuals data from Planful applications. After that, you will regularly download only this year's Actuals and Forecast data and next year's Budget data. The 5 years of older Actuals data is no longer needed in HACPM_Financial.

In Practice

  1. Select Manage > Application Administration > Application Statistics. In the image below, HACPM_Financial is a source model and contains data.


  2. Select Model > Setup, then select HACPM_Financial from the Model drop-down. Notice that HACPM_Financial is a Source model.

  3. Click Clear Model and then confirm that you want to clear the data.


  4. After clearing the model, note that the model is still Generated. This is different from clearing a Master or Analytic model.

  5. Select Manage > Application Administration > Application Statistics. You see that HACPM_Financial now contains no data.


Note:
Clear Model for HACPM_Financial works only with the menu command in SpotlightXL. It is not available within a calculation nor in the Web Model Manager.

Populating the Target Model

Power or Contributor user access is required for these steps.

Model > Setup > Generate Model

In Practice

  1. Select the Target/Master model from the list box.

  2. Scroll down to the Maps section. You should see the name of your map, the name of the Source (HACPM_Financial), the name of the Target (the Master model), and the type of transfer.


  3. Use Generate Model to move data from the Source to the Target.

    When you click Generate Model, Dynamic Planning first checks to see if the values in your Application Settings page are adequate to accommodate the dimensions and members you will be downloading from the Source.

  • The value in Master Model Keys must equal or exceed the product of the number of members in each Key dimension. For example, if you expect your Key dimensions to have membership of Account=80, Company=10, Department=60, Product=30, Project=40, then your Master Model Keys value must be equal to or greater than 57,600,000.

  • The value in Master Model Value Block Size must equal or exceed the product of the number of members in each Value dimension. For example, if you expect your Value dimensions to have membership of Scenario=3, Time=15, Measures=2, then your Block Size must be equal to or greater than 90.

If Master Model keys and Master Model Value Block Size are set to an adequate value, then model generation proceeds: members and/or data are moved from Source to Target/Master according to the options you set in the Map.

  • Map Type: Metadata, with Transfer: Leaf or All
    Data Download does: nothing.
    Generate Model does: populates the Target with the outline of members in the dimensions, both leaf and parent, that are defined in the map. No data.

  • Map Type: Data (recommended only for Write-back maps), with Transfer: Leaf or All
    Data Download does: downloads data from the Structured Planning, Consolidation, and Reporting applications to HACPM_Financial. Dynamic Planning reviews all defined maps and downloads a superset of data that covers all maps.
    Generate Model does: populates the Target with the outline of members in the dimensions, both leaf and parent, that are defined in the map. No data is copied. To get data, you must explicitly run the Map using a Calculation. See “Running a Data Map Explicitly” below.

  • Map Type: Both, with Transfer: Leaf or All
    Data Download does: downloads data from the Structured Planning, Consolidation, and Reporting applications to HACPM_Financial. Dynamic Planning reviews all defined maps and downloads a superset of data that covers all maps.
    Generate Model does: populates the Target with the outline of members in the dimensions, both leaf and parent, that are defined in the map. Populates only the lowest-level member intersections with data.

While the model is being generated, the status message at the bottom left of the Excel screen blinks. When complete, it says Ready. No messages are posted to the Request Status page.

At this point, you have a Source model that has been generated and contains data, and you have a Target/Master model that is generated, contains members, and may contain data depending on the Map type that you used.

Viewing the Default Calculation

Power or Contributor user access is required for these steps.

After generating the model, scroll down on the Setup page and you will see that Dynamic Planning has created one default calculation for you, with the same name as the model.


Model > Calculation

  • Select the name of your Target model from the Model list box.

  • Select that same name in the Name list box. This is the calculation that Dynamic Planning created, and its only purpose is Aggregation.

  • Set Run in Background to Yes.

  • To be notified by email when the calculation completes, enter your email address in the Success Notification and Failure Notification fields.

  • Then click Save.

  • To aggregate the data, click Run.

  • Click Refresh to see the Last Run Status, showing the Aggregation started. When it is complete, clicking Refresh will show the successful completion and the number of seconds elapsed.


  • Now your Master/Target model is populated with both leaf-level and parent data, and it is ready to use in Views and Reports.

Model Statistics

After connecting to Structured Planning, Consolidation, and Reporting applications and creating both Source and Target models, you can view the statistics on the models.

Power user access is required for these steps.

Manage > Application Administration > Application Statistics

Under Source Models, you will see HACPM_Financial, its disk usage, dimension and member counts, and number of records.

Under Master/Analytics Models, you will see your Target model, its disk usage, dimension and member counts, and number of records.

Scroll down further under Models, and you will see new entries for the Target model you generated.

Maintaining and Refreshing the Model

Now you are ready to set up your environment for regular updates from Structured Planning, Consolidation, and Reporting applications. You will create two calculations to maintain the model:

  • Core App DW Download

  • Model Refresh

Power or Contributor user access is required for these steps.

Dynamic Planning: New Improved Aggregation Process

Effective aggregation techniques provide more insight from related data clusters, such as a company’s revenue or performance. For example, a retailer may aggregate sales data by region to compare sales performance across regions.

Aggregation can be applied at any scale to summarize information and make conclusions based on data-rich findings. Data can also be aggregated by date, showing trends for years, quarters, months, etc. These aggregations could be placed in a hierarchy, where you can view the data trends for years, then see the data trends over months for each year.

Aggregation

An aggregation function is a mathematical computation over a range of values that produces a single value summarizing the accumulated data it is derived from. Aggregate functions are often used to derive descriptive statistics: for example, a SUM aggregation groups the values of multiple rows that match certain criteria into a single value of more significant meaning.
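To make the idea concrete, here is a minimal Python sketch of a SUM aggregation by region; the regions and sales figures are hypothetical and are not drawn from any Planful application:

```python
from collections import defaultdict

# Hypothetical per-store sales records: (region, amount).
sales = [("West", 120.0), ("East", 80.0), ("West", 45.0), ("East", 55.0)]

# A SUM aggregation: many input rows collapse into one value per region.
totals = defaultdict(float)
for region, amount in sales:
    totals[region] += amount

print(dict(totals))  # {'West': 165.0, 'East': 135.0}
```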

Improvement and Business Value

The Aggregation algorithm has been enhanced to process data quickly and accurately, saving considerable time when working with large data models. Previously, the Aggregation algorithm could take a long time to process the data. For example, to process all data records in a model with 10 dimensions, of which 6 were Key Dimensions and 4 were Value Dimensions, the old Aggregation algorithm took close to 18 minutes. With this release, the new Aggregation algorithm processes the same data records in just three and a half minutes. This significantly reduces processing time and greatly increases the performance of the process.

The processing time depends on the Key and Value dimensions, the rollup operators, the levels within each dimension, and the total number of records in the model. The new algorithm is much better optimized for computing rollup values, so aggregations complete in less time.

With the new Aggregation algorithm, overall aggregation time is typically cut in half or better. We have tested the new algorithm with various models, and aggregation times improved in every test, in some cases dramatically.

Performance Improvement Use Cases

Example 1: A model of size 3,770.90 MB has 10 dimensions, of which 6 are Key Dimensions and 4 are Value Dimensions. The data processed and the time consumed for aggregation were:

  • Key Combinations Processed: 23,824,281,600

  • Value Combinations Processed: 241,280

  • Data Records Processed: 954,443

  • Time Consumed: 3 minutes 39 seconds

With the old aggregation algorithm, the time consumed for aggregating the same model was 18 minutes 22 seconds.

Example 2: A model of size 18,300.4 MB has 9 dimensions, of which 5 are Key Dimensions and 4 are Value Dimensions. The data processed and the time consumed for aggregation were:

  • Key Combinations Processed: 169,163,280

  • Value Combinations Processed: 275,200

  • Data Records Processed: 750,109

  • Time Consumed: 12 minutes 1 second

With the old aggregation algorithm, the time consumed for aggregating the same model was 36 minutes 55 seconds.

Example 3: A model of size 6,698.2 MB has 6 dimensions, of which 3 are Key Dimensions and 3 are Value Dimensions. The data processed and the time consumed for aggregation were:

  • Key Combinations Processed: 52,136

  • Value Combinations Processed: 8,096

  • Data Records Processed: 52,136

  • Time Consumed: 7 minutes 11 seconds

With the old aggregation algorithm, the time consumed for aggregating the same model was 2 hours 45 minutes 54 seconds.
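For reference, the speedup implied by each example can be computed directly from the reported times. A small Python sketch (the times are the ones quoted above, converted to seconds):

```python
# (old algorithm, new algorithm) aggregation times in seconds,
# taken from the three examples above.
examples = {
    "Example 1": (18 * 60 + 22, 3 * 60 + 39),
    "Example 2": (36 * 60 + 55, 12 * 60 + 1),
    "Example 3": (2 * 3600 + 45 * 60 + 54, 7 * 60 + 11),
}

for name, (old_s, new_s) in examples.items():
    print(f"{name}: about {old_s / new_s:.1f}x faster")
# Example 1: about 5.0x faster
# Example 2: about 3.1x faster
# Example 3: about 23.1x faster
```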

Note:
The aggregation process supports 12 digits before the decimal and 6 digits after the decimal.
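One way to read that limit: a value must fit within 12 digits before the decimal point and 6 after. The helper below is an illustrative sketch for checking a value against those limits; it is not part of any Planful API:

```python
from decimal import Decimal

def fits_aggregation_precision(value, int_digits=12, frac_digits=6):
    """Return True if value fits within the stated digit limits
    (12 before the decimal point, 6 after, per the note above)."""
    d = Decimal(str(value))
    sign, digits, exponent = d.as_tuple()
    frac = max(0, -exponent)              # digits after the decimal point
    integer = max(0, len(digits) - frac)  # digits before the decimal point
    return integer <= int_digits and frac <= frac_digits

print(fits_aggregation_precision("123456789012.654321"))  # True
print(fits_aggregation_precision("1234567890123.0"))      # False (13 integer digits)
```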

Enable Aggregation Performance Flag

You must contact the customer support team to enable the aggregation performance flag in your application.

Once the aggregation performance flag is enabled in your application, perform the following:

  • Log into your application.

  • Go to the Model Setup screen, and set the ‘Enable Aggregation Performance’ property to ‘Yes.’

Note:
This property is not applicable for direct connect models.

This property can be enabled regardless of whether the model has Change Data tracking enabled.

If you have multiple models in the application and need to enable aggregation performance for all models, one option is to manually enable it from the Model setup screen. Another option is to request the support team to enable it from the backend.

Once you enable the flag on the Model Setup screen, the Aggregation function that previously used the old algorithm runs with the new algorithm. Within the same Tenant, you can choose which models to enable.

This change enhances the Aggregation Performance and significantly reduces the time previously taken for aggregation.

Note:
This feature is not available for Direct Access to PCR models or Source-type models.

In Practice: To enable the Enable Aggregation Performance property in SpotlightXL

  1. Open SpotlightXL and go to the Model Setup screen.

  2. Navigate to the Enable Aggregation Performance property and select Yes from the drop-down options.

Structured Planning, Consolidation, and Reporting Applications Download Script > Model > Calculations

Give the calculation a name: Core App DW Download. This script will refresh HACPM_Financial with the latest metadata and data from the DWMaster database in the Structured Planning, Consolidation, and Reporting applications. Remember, you used the Application Settings page to specify that Dynamic Planning should connect to the DWMaster database in Structured Planning, Consolidation, and Reporting applications.

In Practice

  1. Set Run in Background to Yes.

  2. Enter your email address in the Success Notification and Failure Notification fields.

  3. Under Type, add two commands using the drop-down: DownloadMetadata and DownloadData. Under Name, type None.

  4. Then Save the calculation.


Model Refresh Script > Model > Calculation

Now create another calculation called Model Refresh. This script will clear all existing data from the model, load data from HACPM_Financial (via Generate Model), and run an aggregation.

In Practice

  1. Set Run in Background to Yes.

  2. Enter your email address in the Success Notification and Failure Notification fields.

  3. Under Type, add three commands using the drop-down: ClearModel, GenerateModel, and Aggregation. Under Name, type None.

  4. After creating the calculation, be sure to Save it.


Running a Data Map Explicitly

If you previously specified a Map of type Data (see “Generate Model” above), you need to explicitly run the map in order to copy data into your model.


In Practice: Model > Calculation

  1. Give the calculation a name: Run Data Map. Then specify under Type that it should run a Map, and then provide the map name.


  2. Then Save it.

  3. To run the map during model refresh, add the Map line to your Model Refresh calculation.


Scheduling Download and Refresh Scripts

Power or Contributor user access is required for these steps.

When Dynamic Planning is used for analysis and reporting on data from Structured Planning, Consolidation, and Reporting applications, data needs to be refreshed in Dynamic Planning on a regular basis. You can schedule the Download and Refresh calculations to run regularly, such as overnight.

You can schedule daily, weekly or monthly runs.

In Practice: Model > Calculation

  1. Select the model and calculation that you want to schedule.

  2. Select the cell next to Schedule Pattern and then click Scheduler Manager.


  3. Click to put checkmarks under Succeeds or Fails email next to the Power user’s username.

  4. Specify the time you want the calculation to start running in the At box.

  5. Specify how many times you want the calculation to run by selecting Begin and End dates in the Interval section. By default, the dates also carry times denoting the beginning of the Begin date and the end of the End date. For example, a calculation might run at 2 AM daily from April 1 to May 1, 2016. The start time follows the clock on the server, so Power users should be aware of the server's time zone when setting it. Then click Save.


The Schedule Pattern cell is now populated with a code describing the timing of the scheduled calculation.

The dates and times are shown in UTC (Coordinated Universal Time).


After changing the Schedule Pattern setting in the calculation, you must save the changes to the calculation also.

After setting the schedule for the Structured Planning, Consolidation, and Reporting applications Download, Power users may then want to schedule the Model Refresh for an hour later, or put all the commands from both calculations into one calculation and schedule it for an overnight run.

After the calculation is run, Last Run Time and Last Run Status are populated.


Dynamic Planning sends a notification email to state whether the calculation was successful or not.

You can also select Manage > Application Administration > Request Status to see the status of the calculation run.


Using Substitution Variables with the HACPM_Financial Model

You can define substitution variables on the HACPM_Financial source model. The variables can then be used in calculations and maps to more fully automate data loading from Planful applications.

In Practice: Define a Substitution Variable on HACPM_Financial

  1. Select Model > Model Administration > Substitution Variables.

  2. Select HACPM_Financial from the Model drop-down.

  3. Specify which dimension will have the variable from the Dimension drop-down.

  4. Provide the substitution variable name and value in the table.


  5. Click Save.

In Practice: Using the Substitution Variable in a Calculation:

  1. Define variables in the Map.


  2. Use the Period substitution variable, and the SourceMonth and TargetMonth variables, in a calculation that runs the map.


  3. Each month, you simply set Period to the current month and then run the calculation.

In Practice: Optimizing Your Monthly Data Download:

  1. Define a scope on HACPM_Financial using variables that will limit the amount of data that needs to be downloaded.


  2. Update your model refresh calculation to download only the data that is needed. In other words, download the data defined by the scope.

  3. Select the Variables cell and click the Variables Manager button.

  4. Under HACPMScenario, select the Scenario you want to view.

  5. Under HACPMTime, select the @Period@ substitution variable.


  6. Click Update then Save.

  7. Each month, you simply set Period to the current month and then run the "Download and Refresh Monthly" calculation.

Using Drill-Through

End users may perform these steps.

Drill-Through provides a way for users to see all the data points that were aggregated to form a particular data cell. When data is copied from Structured Planning, Consolidation, and Reporting applications to HACPM_Financial during Data Download, transaction data is aggregated into analytic data. When the Master model is created from HACPM_Financial, a subset of that analytic data is made available for users to view.

Drill-Through is available from Views and Reports. It works only on Master models created from HACPM_Financial or from external data sources.

Drill-Through provides visibility into rolled up data in two ways:

  • From parents whose children are present in the model.

  • From parents whose children are absent in the model.

Parents with Children Present

When viewing rollup data in Dynamic Planning, you may want to see the child data items that were aggregated into a parent without zooming, pivoting, or otherwise rearranging the view. Simply select the parent member and click Drill Through.

For example, suppose Employee Benefits for Q1 totals 1100, and the child members for Jan, Feb, and Mar are visible so you can see how the data was derived. If the child members were not visible, you could select the data cell containing 1100 and click Drill Through to see them.

Dynamic Planning opens another tab called DrillThrough. The tab shows the source model, the number of data items that were aggregated, the intersection details, and the list of specific data items. In the example below, the parent data cell for Employee Benefits in Q1 2014 was the cell selected, and it is made up of 3 children that were rolled up to the parent.
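Conceptually, Drill-Through lists the leaf records whose sum forms the selected parent cell. A minimal Python sketch of the rollup in this example (only the 1100 total comes from the example; the monthly split is hypothetical):

```python
# Hypothetical leaf values behind the Employee Benefits Q1 rollup.
children = {"Jan": 400, "Feb": 350, "Mar": 350}

# Drill-Through shows these leaf intersections; the parent cell is their sum.
q1_total = sum(children.values())
print(q1_total)  # 1100
```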


Click Go Back to return to the view.


Parents with Children Absent

If the model map was designed to include parent members in the target model but not include the children, Drill-Through will show the data from the child members in HACPM_Financial.

For example, suppose Sales & Marketing contains two children and you can see the child data. If that model is recreated without the individual Sales and Marketing departments, then you have a parent data cell with absent children.

When you click Drill Through, Dynamic Planning shows the data items that were aggregated to form the Sales & Marketing, Employee Benefits, Q1 value.

Warning:
Drill-Through always looks back at HACPM_Financial for the data items that were aggregated to create the data cell. These data items can appear to be erroneous if you have changed the data in your model. For example, if you change the data in child members in the Master model and recalculate, Drill-Through will not show the updated child members; it will still show the original data items from HACPM_Financial. Similarly, if you change data in parent members that are leaf members (parents with absent children) in the Master model, Drill-Through will still show the original data items that were aggregated to the parent when the model was generated.

Views and Reports

Now you are ready to use the Analyze and Report features to set up views and reports for your users.

Setting a Display Label

You can customize the way that members appear in views and reports using Display Labels. In Structured Planning, Consolidation, and Reporting applications, data is described with a Code and a Name. In Dynamic Planning, the Code becomes the member name by default.


When designing the view or report, using the Properties option, you can use a Display Label instead. By default, the Display Label uses the Structured Planning, Consolidation, and Reporting applications' definition of Code – Name.

Model > Dimension

If you want to create a custom label for any member, go to Model > Dimension, and select the dimension you want to customize.


Under Action, select Set Display Label.


Then look for the member(s) that you want to change. For example, if you want to shorten the Display Label for “2000 – DWH Operations, Inc.” to say “2000 – DWH”, locate 2000 in the table and specify the Display Label.


Then click Save.

When you next open your View (Analyze > Data), the custom Display Label appears.


Writing Back Data

Power user access is required for these steps.

Now that you have copied metadata and data from Structured Planning, Consolidation, and Reporting applications to the Dynamic Planning application, you have the option to use your new model read-only or read-write. There is not a live link between Structured Planning, Consolidation, and Reporting applications and the Target model, so any changes you make in the Dynamic Planning model remain only in that model unless you explicitly write them back to Structured Planning, Consolidation, and Reporting applications.

To set up your environment for write-back:

  • Structured Planning, Consolidation, and Reporting applications: set up access to the Web Service

  • Structured Planning, Consolidation, and Reporting applications: set up a Data Load Rule that will accept incoming data from Dynamic Planning

  • Dynamic Planning: set up the URL address pointing back to Structured Planning, Consolidation, and Reporting applications

  • Dynamic Planning: set up a Data Map to map dimensions and members back to Structured Planning, Consolidation, and Reporting applications

  • Dynamic Planning: set up a Calculation to run the Data Map

  • Structured Planning, Consolidation, and Reporting applications: run a report to validate that data has been changed

With this setup, you will be able to run the Calculation whenever you want to write data back to Structured Planning, Consolidation, and Reporting applications.

Structured Planning, Consolidation, and Reporting Applications: Configure Access to Web Services

In Structured Planning, Consolidation, and Reporting applications, log in and select the Application that will receive data updates from Dynamic Planning.

In Practice: Maintenance > Configuration Tasks

  1. Select Web Service Access.

  2. On the Web Service Access page:

  3. Under API, select Enable.

  4. Look at the State Free Authentication URL and copy/paste it into Notepad or another document for safekeeping.

  5. In the Available Users box, specify the username that will have access to both the Structured Planning, Consolidation, and Reporting applications and the Dynamic Planning application. Then use the right arrow to move the username into the Email User(s) list so that you will receive an email at the conclusion of each write-back.


  6. Click Save and Complete.

Structured Planning, Consolidation, and Reporting Applications: Configure a Data Load Rule to Accept Data from Dynamic Planning

Be sure that you are still logged in. Now you will create a data load rule that will accept incoming data from Dynamic Planning.

In Practice: Maintenance > Data Load Rules

  1. Click New Data Load Rule. Then specify:

    • Rule name

    • Load Type: Web Services

    • Load Item: Data

    • Load Sub Item: GL Data


  2. Then click Next or Select Sample Input File.

    Specify the following items:

    • Row Containing Header: 0

    • Number of Rows to Skip: 0

    • Number of Source Fields: the number of segments in your Structured Planning, Consolidation, or Reporting applications (not your model) plus three (Fiscal Year, Fiscal Month, and Amount)

    • Do not change the other fields on this page.





  3. Click Next or Define Overall Rule Settings.

    Specify the following items:

    • Reporting: Common Currency

    • Credit/Debit: No for negative numbers, Yes for positive numbers

    • Data Type: MTD or YTD (whichever you chose in your initial download Map)

    • Calculate Trial Balance: No

    • Currency Conversion: either Yes or No

    • Scenario Type: All

    • Scenario: select the Scenario containing the changes that are to be written back

    • Frequency: Month

    • Time Mapping: Down Rows

    • Automatic Cube Refresh: Yes

    • Roll Annual Balances: No


    Do not change the “Defaults” and “Clear Data Combination” sections.

  4. Click Next or Manipulate Input File.

    No changes on this page.

  5. Click Next or Define Data Mappings.

    From the list boxes, select the segments that correspond to the dimensions in your Map, in alphabetical order. You do not need to map Scenario, Reporting, Time, or Measures. The last 3 columns must be Fiscal Year, Fiscal Month, and Amount.


  6. Click Next or Load Data.

    Select the Scenario that you chose in your Map (the one in which data has been changed).

  7. Click Finish.

Note:
The Data Load Rule that you create will remember the data items it modifies when you use it for write-back. A best practice is to use the same Data Load Rule for the write-back procedures from your model consistently. For example, if users are writing back 2017 Budget Data for a certain department and certain accounts, create and use one Data Load Rule for that data grouping and use a specific name, such as ‘2017BudgetSalesTravel.’ Then every time data is written back, it will overwrite data that was there before. If you use more than one Data Load Rule for writing the same data items, the data will aggregate instead of overwrite.

Dynamic Planning: Set Up the Address Back to the Structured Planning, Consolidation, and Reporting Applications

In Dynamic Planning, log in and select the Application that will send data updates to Structured Planning, Consolidation, and Reporting applications.

Manage > Application Settings

In the Soap Uri box, paste the State Free URL that you previously copied in “Maintenance > Configuration Tasks.” Be sure the Username and Password information is up to date. (If you have previously entered the username and password, the password field will appear blank.) Click Save.

Dynamic Planning: Mapping Data Back to Structured Planning, Consolidation, and Reporting Applications
When you brought data into Dynamic Planning from Structured Planning, Consolidation, and Reporting applications, you specified how the data which had been downloaded into HACPM_Financial should map to the Dynamic Planning target model (called Integration):

  • HACPM_Financial was the Source.

  • Integration was the Target.


Now you need to map data from Dynamic Planning to Structured Planning, Consolidation, and Reporting applications so Source and Target are reversed.

  • Integration is now the Source.

  • HACPM_Financial is now the Target.

In Practice: Model > Map

  1. The Target Model is HACPM_Financial.

  2. The Name lets you give this mapping a name so you can re-use it.

  3. Type must be set to Data for Write-back.

  4. The Source is the name of the model that contains the changes you want to write back to Structured Planning, Consolidation, and Reporting applications.

  5. Transfer must be set to Leaf.

  6. The Write Back Id is the name of the Data Load Rule that you created in “Maintenance > Data Load Rules .” The Data Load Rule name is case-sensitive.

  7. Leave Access Token blank.


Source to Target Map Table

Here you must list essentially the reverse of the map you created before data download: the three columns that were on the left are now on the right, and vice versa. The best practice, however, is to use Fixed Member to specify exactly which data items you want to write back, rather than writing back all members in all dimensions.

For example, instead of writing back all members of Account, Company, Department, Product, and Project, a map can specify only those members that have changed and that you want written back to Structured Planning, Consolidation, and Reporting applications. This way, you control the flow of data between Structured Planning, Consolidation, and Reporting applications and Dynamic Planning.

Note:
Member names are case-sensitive.


Remember that HACPM_Financial has two dimensions, Intercompany and Reporting, that were filtered out prior to data download, and you need to specify the member that data will be written back to. The member specified must be a leaf member; for example, Intercompany: Default and Reporting: G/L Data (CC) are leaf members.

For all months in 2014, data for product D006341, account 4010, company 2100, department SLSCA, and Actuals will be written back.

Note:
If you decide to write back all members of a dimension instead of a fixed member, the dimensionality between the Source and Target dimensions must be the same. If your Source dimension is a subset of the Target dimension, then you can tell Dynamic Planning to write back only the members common to both dimensions by specifying Common under Match Criteria.


Ability to Automatically Add Members When Writing Data Back From Dynamic Planning to Structured Planning, Consolidation, and Reporting

This feature is available upon request. Please contact Planful Support to enable.

Dimension members can be added automatically to hierarchies when you write back data from Dynamic Planning to Structured Planning, Consolidation, and Reporting. For example, you add new vendors in Dynamic Planning via an External Source Model, then send the data to Structured Planning, Consolidation, and Reporting via a Model Map executed by a Calculation (in the Dynamic Planning module) and a Data Load Rule (in the Structured Planning module). Or, you add products in Dynamic Planning and write the additional products back to Structured Planning.

In Practice: Append Missing Dimension Members in a Model Map - Step 1

Enabling Write Back with Automatic Member Addition requires that you select Yes for the Append Missing Dimension Members option, which is available only for Model Maps linked to the HACPM_Financial model. Follow the steps below.

  1. In SpotlightXL, navigate to Model > Map.

  2. Select the HACPM_Financial Model and the Map you created to write data back to Structured Planning. Verify that the Map contains the Write Back Id for the Data Load Rule.

  3. For Append Missing Dimension Members, select Yes.

  4. Save.

Note:
You can use a Writeback Map with attributes and lookups.

In Practice: Run the Calculation for the HACPM_Financial Model - Step 2

The Calculation must include the Map where Append Missing Dimension Members is set to Yes.

  1. In SpotlightXL, navigate to Model > Calculation.

  2. Select the HACPM_Financial Model and the Map you’ve created to write data back to Structured Planning.

  3. Click Run.


  4. Verify that the Calculation completed successfully.

In Practice: Verifying the Dimension Members Were Appended in Structured Planning - Step 3

You will receive a detailed email indicating the segment updates to Structured Planning.

It is a best practice to log in to Structured Planning and verify the dimension members added to the hierarchy via Hierarchy Management or the Planning Control Panel. New members added in Dynamic Planning and written back to Structured Planning are added to the DI-AutoCreated rollup.

In Practice: Adding Appended Dimension Members to a Dynamic Report - Step 4

  1. In Reports, access a Dynamic Report.

  2. In this case, product members were added from Dynamic Planning to Structured Planning. Click the Product Dimension Member (in this case it is on the row axis).

  3. Locate the DI-AutoCreated members in the hierarchy.

  4. Select them to include in the Dynamic Report and Save.

    In the image below, the newly added members are used in a Dynamic Report on the Row axis.

    spring8cs.png

Dynamic Planning: Set Up a Calculation to Run the Map

Now you must execute the write-back with a calculation.

In Practice: Model > Calculation

  1. The Model is HACPM_Financial.

  2. In the Name field, give this Calculation a name so you can reuse it.

  3. Set Run in Background to Yes.

  4. Under Type, specify Map. Under Name, specify the name of the Map that you created in the previous section.

  5. Then click Save.

    image1492zzzzz1232345690123123456901223456789012361.png

  6. Click Run.

    image1492zzzzz1232345690123123456901223456789012362.png

You can view the status of the calculation by selecting Manage > Application Administration > Request Status.

Note:
If you want to write back data regularly, in the Calculation definition, select the Schedule Pattern option, then click Scheduler Manager. See “Scheduling Download and Refresh Scripts” for details.

Validating Data Changes in Structured Planning, Consolidation, and Reporting Applications

In Structured Planning, Consolidation, or Reporting applications, create a Dynamic Report to verify that the data has been written correctly.

For example, here is a Dynamic Report that shows the values written from the View above.

20209r.png

Dynamic Planning: Using Attributes as Filters for Write-back to PCR

You can use attributes in Maps to fine-tune which data should be written back to PCR. The attributes act as a filter to select the data intersections to be written back. Instead of writing back a whole hierarchy for a particular dimension, you can write back only those members of the hierarchy that have a particular attribute. The match criterion of Common is assumed when attributes are in the map.

Example 1: Write-back without Attributes

The following map shows how data is written back without attributes; this is the traditional method. All members of the Road Bikes hierarchy in 2014 with data for account 4010 are written back. As a reminder, the map is of type Data, the transfer type is Leaf, and the Write Back ID is the name of the data load rules file in PCR.

ModelingImagesDataIntegration-CoreFunctionalSpecWBAttr11.png

Here is the calculation that runs the map.

ModelingImagesDataIntegration-CoreFunctionalSpecWBAttrCalc11.png

Here is the view (Analyze > Data) of the data in Dynamic Planning.

image1492zzzzz1232345690123123456901223456789012363.png

Here is the view of the data in PCR before and after the calculation is run.

ModelingImagesDataIntegration-CoreFunctionalSpecWBAttrBefore11.png

ModelingImagesDataIntegration-CoreFunctionalSpecWBAttrAfter11.png

Example 2: Write-back with Attributes

Consider the following attributes on the Time and Product dimensions of the model DW.

Time contains an attribute called Seasons. Each month rolls up to a quarter and is mapped to a season.

Month | Quarter | Season
January | Q1 | Winter
February | Q1 | Winter
March | Q1 | Spring
April | Q2 | Spring
May | Q2 | Spring
June | Q2 | Summer

Product contains an attribute called Material. Each bicycle part rolls up to its parent and is mapped to a type of material.

Part | Rollup | Material
Handlebars | Frame Assembly | Aluminum
Grips | Frame Assembly | Rubber
Saddle | Saddle Assembly | Leather
Dropper Post | Saddle Assembly | Steel
Fork | Suspension | Aluminum
Susp. Remotes | Suspension | Steel

Here is a map with attributes added. Because the attribute is associated with a particular dimension, that dimension does not need to be explicitly identified as a Source Dimension. All members of the Bicycles hierarchy with Material attribute Steel, with data for account 4010 in the months of 2014 with Season attribute Winter, are written back. For data to be written back, it must have both the Steel product attribute AND the Winter time attribute. As a reminder, the map is of type Data, the transfer type is Leaf, and the Write Back ID is the name of the data load rules file in PCR.

ModelingImagesDataIntegration-CoreFunctionalSpecWBAttr21.png

Here is the calculation that runs the map.

ModelingImagesDataIntegration-CoreFunctionalSpecWBAttrCalc21.png

Here is the view of the data in Dynamic Planning. Green boxes show the dimension filters selected in the map. Red boxes show the attribute filters selected in the map. Yellow boxes show the data that should be written back to PCR.

ModelingImagesDataIntegration-CoreFunctionalSpecWBAttrView21.png

Here is the view of the data in PCR before and after the calculation is run.

ModelingImagesDataIntegration-CoreFunctionalSpecWBAttrBefore22.png

ModelingImagesDataIntegration-CoreFunctionalSpecWBAttrAfter21.png
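The filtering behavior in this example can be sketched in Python. This is an illustrative model only: the member-to-attribute mappings and row structure are assumptions based on the tables above, not a Dynamic Planning API.

```python
# Attribute maps from the Time and Product tables above (assumed for illustration).
season = {"January": "Winter", "February": "Winter", "March": "Spring",
          "April": "Spring", "May": "Spring", "June": "Summer"}
material = {"Handlebars": "Aluminum", "Grips": "Rubber", "Saddle": "Leather",
            "Dropper Post": "Steel", "Fork": "Aluminum", "Susp. Remotes": "Steel"}

def passes_filters(row):
    """A row is written back only if it matches the attribute filter on
    BOTH dimensions: Material = Steel AND Season = Winter."""
    return (material.get(row["Product"]) == "Steel"
            and season.get(row["Time"]) == "Winter")

rows = [
    {"Product": "Dropper Post", "Time": "January", "Value": 100},  # Steel, Winter
    {"Product": "Dropper Post", "Time": "March",   "Value": 120},  # Steel, Spring
    {"Product": "Saddle",       "Time": "January", "Value": 250},  # Leather, Winter
]

# Only the Steel AND Winter intersection survives the filter.
written_back = [r for r in rows if passes_filters(r)]
```

This mirrors the rule stated above: an intersection is written back only when it satisfies the attribute filters on every dimension at once.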

Example 3: Write-back with Multiple Attributes

Using the same attributes on the Time and Product dimensions as in Example 2, here is a map with multiple attributes per dimension added. Because the attribute is associated with a particular dimension, that dimension does not need to be explicitly identified as a Source Dimension. All members of the Bicycles hierarchy with Material attribute Steel OR Leather, with data for account 4010 in the months of 2014 with Season attribute Winter OR Spring, are written back. For data to be written back, it must have the Steel or Leather product attribute AND the Winter or Spring time attribute. As a reminder, the map is of type Data, the transfer type is Leaf, and the Write Back ID is the name of the data load rules file in PCR.

ModelingImagesDataIntegration-CoreFunctionalSpecWBAttr31.png

Here is the calculation that runs the map.

ModelingImagesDataIntegration-CoreFunctionalSpecWBAttrCalc31.png

Here is the view of the data in Dynamic Planning. Green boxes show the dimension filters selected in the map. Red boxes show the attribute filters selected in the map. Yellow boxes show the data that should be written back to PCR.

ModelingImagesDataIntegration-CoreFunctionalSpecWBAttrView31.png

Here is the view of the data in PCR before and after the calculation is run.

ModelingImagesDataIntegration-CoreFunctionalSpecWBAttrBefore23.png

ModelingImagesDataIntegration-CoreFunctionalSpecWBAttrAfter31.png
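The multi-value semantics of this example (OR within a dimension, AND across dimensions) can be sketched the same way. As before, the data structures are illustrative assumptions, not a Dynamic Planning API.

```python
# Attribute maps from the Time and Product tables earlier (assumed for illustration).
season = {"January": "Winter", "February": "Winter", "March": "Spring",
          "April": "Spring", "May": "Spring", "June": "Summer"}
material = {"Handlebars": "Aluminum", "Grips": "Rubber", "Saddle": "Leather",
            "Dropper Post": "Steel", "Fork": "Aluminum", "Susp. Remotes": "Steel"}

# Multiple attribute values per dimension: a member may match ANY of the
# listed values (OR), but the row must match EVERY dimension's filter (AND).
MATERIAL_FILTER = {"Steel", "Leather"}
SEASON_FILTER = {"Winter", "Spring"}

def passes_filters(row):
    return (material.get(row["Product"]) in MATERIAL_FILTER
            and season.get(row["Time"]) in SEASON_FILTER)

rows = [
    {"Product": "Saddle",       "Time": "March",   "Value": 250},  # Leather AND Spring -> kept
    {"Product": "Dropper Post", "Time": "January", "Value": 100},  # Steel AND Winter -> kept
    {"Product": "Grips",        "Time": "January", "Value": 75},   # Rubber -> excluded
    {"Product": "Saddle",       "Time": "June",    "Value": 90},   # Summer -> excluded
]

written_back = [r for r in rows if passes_filters(r)]
```

Note that the Saddle/June row is excluded even though Saddle matches the Material filter: failing any one dimension's filter excludes the intersection.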

