Advanced Topics

Performance Optimization: External Source Model LookUp Function

The LookUp function enables External Source Model (ESM) models to reference data from one another using specified key field(s).

Since the LookUp function inherently scans the Lookup ESM table, it can be a resource-intensive operation. Therefore, it is crucial to understand the recommended guidelines and design best practices to ensure optimal performance.

The following aspects influence the ESM formula performance:

  1. Size of the Look-up table 
  2. Number of Key fields used in the Look-up Table
  3. Length of the Key field(s)
  4. Number of references to the Look-up table in a formula
  5. Any use of circular references between the Look-up table(s)

Size of the Look-up Table

The best practice for look-up tables is to keep them small, ideally containing a few hundred records. The maximum recommended size is approximately 5,000 records.

When a look-up table exceeds this limit, or combines with the other complexities outlined in this document, performance can be significantly affected. Therefore, it is advisable to divide larger look-up tables and organize the data into smaller models.

For any look-up tables that exceed the maximum suggested limit, we recommend contacting Planful for further guidance.

Number of Key fields used in the Look-up Table

A typical recommendation is to use one to four key fields. When adding more than one key field, be aware that performance may decrease based on their length.

Length of the Key-fields

You will achieve optimal performance with a key-field length of 4 characters. However, be cautious with longer key fields, ensuring their length does not exceed 16 characters.

Look-up Table References in the Formula

Using multiple look-up table references within nested IF conditions increases the lookup footprint during queries, resulting in more database calls and slower performance. While the impact varies based on model size and other configurations, it can affect calculation run times by approximately 25%.

To enhance performance, we recommend avoiding nested IF statements and simplifying the formula steps, as demonstrated below. 

  • For expressions such as the following: 
    • IF(LOOKUP("ESM LookupModel","CONCAT",{CONCAT})="","0",LOOKUP("ESM LookupModel","CONCAT",{CONCAT}))
  • Split the expression into two steps, as shown below:
    • Add a new field "LookuprefValue" with the formula LOOKUP("ESM LookupModel","CONCAT",{CONCAT})
    • Then rewrite the formula for the current column as IF(LookuprefValue="","0",LookuprefValue)
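
For intuition, here is a minimal Python sketch of the same refactor, using a hypothetical in-memory stand-in for the ESM look-up table; each lookup_esm() call represents one scan of the look-up table:

    # Hypothetical in-memory stand-in for the ESM look-up table.
    LOOKUP_TABLE = {"2025-Jan-Sales": "1200"}

    def lookup_esm(key_value: str) -> str:
        # Each call represents one scan of the look-up table.
        return LOOKUP_TABLE.get(key_value, "")

    def nested_if(key: str) -> str:
        # Nested-IF form: the look-up table is scanned twice per row.
        return "0" if lookup_esm(key) == "" else lookup_esm(key)

    def two_step(key: str) -> str:
        # Two-step form: scan once and reuse the result (the LookuprefValue field).
        lookup_ref_value = lookup_esm(key)
        return "0" if lookup_ref_value == "" else lookup_ref_value

The two-step form halves the number of table scans per calculated row.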

Any Use of Circular References Between the Look-up Table(s)

If you configure a lookup from Model1 to Model2 and create another lookup from Model2 back to Model1, this results in a circular reference during calculations. Such a setup can significantly affect performance, especially if your lookup tables are large. We have observed that this circular lookup configuration can lead to a performance decrease of 50% to 60%. Additionally, this arrangement prevents the application from utilizing any caching mechanisms, further contributing to a notable decline in performance.

To ensure an optimal experience, we recommend completely eliminating circular references from your design, possibly by using a third model. Furthermore, consider all the other recommendations provided in this article to maintain optimal performance for your calculations involving the Lookup function.
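
If you want to check a design for cycles before building it, the following minimal Python sketch (with illustrative model names) applies standard cycle detection to the look-up dependencies; routing both models through a shared third model keeps the graph acyclic:

    # Look-up dependencies: model -> models it looks up (names are illustrative).
    circular = {"Model1": ["Model2"], "Model2": ["Model1"]}
    acyclic  = {"Model1": ["Model3"], "Model2": ["Model3"], "Model3": []}

    def has_cycle(deps: dict) -> bool:
        # Depth-first search for a back edge in the look-up graph.
        visiting, done = set(), set()
        def visit(node):
            if node in visiting:
                return True           # back edge => circular reference
            if node in done:
                return False
            visiting.add(node)
            if any(visit(n) for n in deps.get(node, [])):
                return True
            visiting.discard(node)
            done.add(node)
            return False
        return any(visit(m) for m in deps)

    print(has_cycle(circular))  # True  -> redesign needed
    print(has_cycle(acyclic))   # False -> the shared third model breaks the cycle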


Enhanced Support for Segment Hierarchies in Template Account Mapping Screens

We've introduced a significant usability improvement to our Planning Templates. You can now view segment hierarchies and select members, including rollups, for all dimensions in history and reference account mappings of the Templates. With this, any segment hierarchy member changes, such as additions, deletions, or changes to parent members, will be automatically adjusted for appropriate data aggregation for template purposes when you map rollup members. 

This significantly reduces the time and effort involved in creating or administering templates.

Availability

This feature is currently available for non-shared mode and 12-period applications only. It applies to the following template types in Structured Planning:

  • Global Template - Single Copy
  • Allocation and Global Template - Entity Copy

This functionality is unavailable if you have a shared mode or a non-12-period application. Wildcards are not supported. If your existing template uses Wildcards, template mappings will be expanded to the corresponding leaf members.

Rollout and General Availability (GA)

This feature is currently in a slow rollout mode and is intended to be made GA later this year.

Requesting the Functionality

New Customer Implementations:

New tenants (those without templates created) can request this functionality by creating a support case. Our team will perform the enablement using an automation tool, which carries out prerequisite validations and enablement.

Existing Customers:

We recommend that existing customer tenants perform validation in Sandbox before enabling in Production (especially if you use Wildcards in Account mappings) and follow the process outlined below:

  • Create a Support Case.
  • Our Support team checks the prerequisites (the application should be 12-period and non-shared mode).
  • Support enables a flag ("Show_NewTemplateSetupUI_Toggle").

Following this, you will see an option to Enable new Mapping functionality in the application.

Important Note

Clicking the 'Enable new Mapping' button initiates a migration across all your templates/scenarios to reflect the new rollup-level mapping setup and expand the Wildcards appropriately to corresponding leaf members. We advise performing this only during non-business hours, as it locks the following screens until the migration is complete, which could take 30 minutes or more, depending on the Templates/Scenarios configured in your application.

  • Scenario Setup
  • Template Setup
  • Planning Control Panel
  • Simulation Engine
  • Hierarchy Management
  • Cloud Scheduler

After proceeding, a Cloud Scheduler Task for the "Template Data Migration Process" will be scheduled to start in 5 minutes. Upon completion of the data migration, you'll receive an email notification. You can also check the progress through the Cloud Scheduler Job Manager.

User Integration REST-based APIs

Following our previous discussion of User Integration API best practices, we have revised the User Integration REST-based APIs. These changes focus on the Add/Edit User API.

Add / Edit User API

The Add/Edit User APIs have been enhanced to update additional roles (Support Role/Task Manager Role) and properties (Enable Two-Step Verification).

API End-point: {TenantAppURL}/api/admin/users

Options have been added to update the Support Role and Task Manager Role. A default Support and Task Manager role will be automatically assigned during user creation. These API endpoints will be helpful if you use Budget Manager or Task Manager and want to automate user role access updates.

Support Role Update

API End-point: {TenantAppURL}/api/admin/update-support-role

When you update a user's role to the Budget Manager Role, that user will only have access to the Budget Manager Experience. In one of our upcoming releases, we will provide an option to allow standard navigation, especially for Admin users requiring both Standard and Budget Manager access.

Task Manager Role

API End-point: {TenantAppURL}/api/admin/update-Taskmanager-role

The task manager roles are of two types: Admin User and Regular User. When you set the role to Admin User, the user can create a personalized checklist of tasks to manage the planning and consolidation process, along with adding tasks and subtasks for other users. Meanwhile, the regular user can only comment on tasks and subtasks and attach necessary documents.
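
As an illustration only, here is how such a call might look from Python. The authentication header and payload field names are assumptions for this sketch, not the documented schema; consult the developer's site referenced below for the exact request format.

    import requests

    TENANT_APP_URL = "https://yourtenant.planful.com"  # your tenant application URL
    TOKEN = "..."  # obtain via your integration's auth flow

    # Hypothetical payload; field names are illustrative only.
    payload = {"loginName": "jdoe@example.com", "taskManagerRole": "Admin User"}

    response = requests.post(
        f"{TENANT_APP_URL}/api/admin/update-Taskmanager-role",
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()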

Additional User REST APIs

For more information on how to use these REST APIs, visit our developer's site.

  • Users: Get the list of all users and user information
  • delete-user: Delete the specified user
  • add-approval-role-for-budget-entity: Assign approval roles, such as Budget Preparer, to a Budget Entity
  • get-approval-role-for-user: Get the approval role for the user with budget entity code and label
  • get-all-approval-roles-for-all-users: Get approval roles for all users with budget entity code and label
  • remove-approval-role-for-budget-entity: Remove the approval role for a budget entity
  • allow-scenario-access-to-user: Set scenario access permissions for the user
  • remove-scenario-access-from-user: Remove scenario access permissions from the user
  • get-budget-scenario-accessible-to-user: Get the details of all the budget scenarios accessible to the user
  • map-to-usergroup: Map the user to a user group
  • un-map-to-usergroup: Unmap the user from a user group
  • get-all-user-groups: Get all user groups information
  • get-usergroups-by-user: Get the user group information by user


Writeback Performance Improvement with new REST APIs for Data Loading

The new REST APIs offer more efficient performance for Writeback data loads. With these APIs, the application utilizes the input data stream and loads the data directly into Structured Planning without the extensive chunking used by the current SOAP-based load architecture. This means Writeback data loads involving large data volumes will see significantly improved efficiency.

However, note that this is currently an opt-in option. To enable the new Data Load Performance flag (flag name: Enable DLR Performance) for your Dynamic Planning application, please contact Planful Support.

Date-based Calculations using Start and End Dates in Planning Templates

Did you know that Start Date and End Date values for each period are populated on hidden template lines?

You can retrieve and use the start and end dates via the following range values in date-based template formulas:

  • PERIODSTARTDATE: Returns the start date of the selected budgeted period.
  • PERIODENDDATE:  Returns the end date of the selected budgeted period.

Here are some examples of this usage in the Template formulas:

  • Driver-based calculation of Revenue/Commissions
    • Compare the Period Start Date to a Projected Start Date (Example: “O20”).
           =IF(DATE(YEAR($O20),MONTH($O20),DAY($O20))<PERIODSTARTDATE, $P21*AU21, AU21)
  • Number of Days for Each Period
    • Days(PERIODENDDATE, PERIODSTARTDATE)

The period start and end dates make it easier to model date-based template calculations, such as the start of an expense (for example, labor cost or depreciation) or the recognition of revenue based on a date attribute. The Period Start and End dates are available for Global Template Single Copy, Entity Copy, Line Item, and Allocation Templates.
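
To make the period math concrete, here is a small Python sketch that mirrors the two formulas above, with illustrative dates standing in for PERIODSTARTDATE and PERIODENDDATE:

    from datetime import date

    # Illustrative stand-ins for PERIODSTARTDATE and PERIODENDDATE.
    period_start = date(2025, 3, 1)
    period_end = date(2025, 3, 31)

    # Mirrors Days(PERIODENDDATE, PERIODSTARTDATE).
    days_in_period = (period_end - period_start).days

    def driver_amount(projected_start: date, driver: float, base: float) -> float:
        # Mirrors =IF(projected start < PERIODSTARTDATE, driver * base, base).
        return driver * base if projected_start < period_start else base

    print(days_in_period)                                  # 30
    print(driver_amount(date(2025, 1, 15), 1.05, 1000.0))  # 1050.0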

Dynamic Planning Master/Analytic Model Design Optimization Best Practices

Below are general optimization techniques that improve the performance of model aggregations, calculations, and reporting.

Dimension Type Optimization

  • Optimize the Key/Value type combinations to improve overall model performance.
  • Value Block size will be a critical factor in the overall performance. Keep the Value combinations within the range of 100 to 250K.
  • If dimension names or types are incorrect after saving, use the Modify Model option to make changes. You can either delete or modify the dimension type using this option.
  • There must be at least one Key and one Value dimension defined.
    Note:
    • Dimensions that are used on Scope and Formula References cannot be Value dimensions.
    • If you want to add Attributes, add them to Key-type dimensions for better performance.

Optimize the use of the Rollup Operators

  • Adjust rollup operators to ‘~’ in dimensions like Scenario, Time, Measures, and Reporting as needed (one-time setup).
  • Review the rollups across all dimensions and use them judiciously. If members do not need to roll up, use the corresponding rollup operators, such as ~ or !.

Define Scope and Optimize the Use of Scope

  • Defining the scope for each model improves performance and scalability by allowing users to perform calculations on a block of data versus an entire model.
  • Optimize DataDownload, Aggregation, and ClearData Calculations using the Scope.
  • Use Variables in the Scope definition.

Optimize the use of Variables and Variable Expressions

  • Use Variables and Variable expressions consistently in the Scope, Formulas, Maps, and Reports/Views.
  • Be consistent with the variable naming and document the purpose.
  • Use the new Variable Management screen to administer the Variables centrally.

Optimize the use of the Change Data Tracking Functionality

  • The Change Data Tracking feature tracks changes to the data blocks; it marks leaf-level data as dirty whenever it changes. The changes may be due to actions such as saving data entries, running a calculation, or loading data.
  • Run a full aggregation (Aggregation, None) before the feature is turned on.
  • Perform full aggregations regularly. Because this feature performs optimized aggregations based on tracked changes to leaf data, regular aggregation keeps the amount of dirty data within a reasonable limit.
  • If you are not using the new Aggregation Performance feature, we encourage you to use it. It can be enabled at the individual model level.

Cautiously Create Dimensions as Attributes for Improved Performance

  • Aggregations and Calculations in a model will be faster with fewer dimensions.
  • Create Dimensions as Attributes for reporting and analysis while mapping metadata from Structured Planning.
  • Learn more about attributes here.

Create several Smaller Purpose-Built Models rather than Large Models

  • Look to organize your models based on use cases, targeted user profiles, regions, etc., so they are small and efficient.
  • When you have large models approaching 20 GB (Note: this is the compressed model size displayed on the Application Statistics screen), look closely at the value block, number of records, and model sizes, and optimize them appropriately with the techniques discussed in this article.
  • The following strategies are very effective in optimizing for size, specifically:
      • Reduce the number of dimensions: identify and create potential attributes
      • Flatten the dimensions: eliminate any unnecessary rollups/dimension members
      • Use the rollup operators, such as ~ or !
      • Optimize Key/Value structures so they are balanced
  • Always look to delete unwanted scenarios, periods, and any data/metadata not required for reporting/analysis. When deleting members, do a quick model validation to check and correct any dependencies.
  • Use Model Administration -> Validation Report to view the model status. This model-specific report shows any issues with the model. Review the report after a Data Download from Structured Planning or after any changes to model definitions, including Hierarchy updates and Map, Formula, Scope, and setup updates. The validation report can be run on HACPM Source and Master/Analytics models and can help validate and correct any model-specific issues.


    Finally, look to leverage the following functionality or features in your application or process:
  • Aggregation Performance Improvement 
  • Writeback Performance Improvements 
  • Formula Performance Improvement 
  • ClearData Performance 
  • ClearData on HACPM Financial Model

Reference Cube Optimization for Faster Template “Open” Performance

Do you configure Reference Cube lines in your planning templates? Are you aware that using reference cube lines impacts template open times?

Good news: We optimized the reference cube lines query process, and it is now available as an opt-in feature.

Benefits of this optimization:

  • Templates open much faster as Reference Cube Query uses the direct data members
  • Simulation run times will improve as the templates with Reference Cube lines get the data faster
Note:
Too frequent cube processing will impact the reference cube query performance, just as it impacts your reporting performance. We recommend exercising caution with the cube processing frequency. Refer to the Automated Data Refresh and Incremental Cube Processing insights in this article for better planning and an optimal setup.

The new Reference Cube optimization is available if:

  • Templates use default time sets
  • Templates are not configured to use the Expedited template loading option
  • Time and Scenarios are not overwritten in the template RC lines configuration
  • Scenario and Compare Scenario Setup: Actual scenario period ranges use the fiscal year start and end periods (for example, a Jan to Dec start and end range, as opposed to a Jan to Sep or Sep to Dec type of range)

If any of the above configurations are not applicable, the template reference cube lines work in the normal mode, and you may not see the performance benefits.

How do I enable this optimization?

  • Contact Planful Support 
  • Our Support team will set the “RCDataWithMember” option to “0.” This causes the new query optimization to take effect
Note:
This new optimization is NOT available for Actual Data Templates and Timesets-based Planning templates. Irrespective of the configuration, their reference cube line processing uses the normal mode.
Note:
We encourage a judicious approach to configuring reference cube lines: keep them under 20 lines if end users use the template for data input.

Dynamic Planning - Calculation Notification Framework

You can now enable more user-friendly notifications on calculation execution status for users when a calculation is mapped to Reports and Views.

Set the following Calculation Property from Manage -> Application Administration -> Application Settings page.

  • Enable Calculation Execution status notification for Views and Reports to YES.


When this flag is set to YES, 

  1. When a user tries to save a report/view with a Calculation mapped to it, the system checks the execution status of the calculation on the model. If it is “In Process,” the system shows a status notification and suggests that the user try again after some time.
    Note:
    User updates are retained so the user can submit the report/view after the calculation execution is complete.
  2. If the user opens a report that has a calculation mapped to it, and a calculation is running, the user will see a notification.

When to use this notification framework:

  • This notification framework is beneficial for avoiding potential concurrency issues from simultaneous calculation executions on the model. 
    • When you have multiple “save” enabled reports/views with calculations mapped to them, this option allows you to notify the users of the status and prevent frequent or simultaneous submissions.
    • When Calculations are configured to run in the background.

Dynamic Planning - Calculation Scheduler Options

The Calculation Scheduler option is now available via the Dynamic Planning Model Manager. 

This Calculation Scheduler functionality offers two key improvements:

  • Easily Pause and Un-Pause Schedules
  • Enhanced “Repeat” schedule option to schedule calculations “Hourly”

Pause and Un-Pause schedules:

  • Select the schedules (multiple or all) via the new Calculation Scheduler
  • Use the “Bulk Action” options, Scheduler On or Scheduler Off
  • Use this option to administer your Dynamic Planning schedules more efficiently in a few clicks

Enhanced Repeat Scheduler option:

Now you have the option to schedule Dynamic Planning calculations at an “Hourly” frequency.

This option allows you to schedule your calculations more frequently than the current “Daily” option. Previously, many customers and solution architects created copies of a calculation and scheduled them to run at different times each day to work around the limitation. This improvement significantly reduces that calculation-maintenance overhead.

Note:
This option is currently available only via the Calculation Scheduler in Model Manager.



How do I enable the “Hourly” schedule option?

  • Contact Planful Support via the Support case 
  • Support enables the following flag for the Dynamic Planning application
    • Enable Schedule Hourly Repeat Option

FTP/SFTP native Planful connector

The native FTP/SFTP connector allows users to connect to an SFTP server, fetch data files, and load the data into Planful.

Below are details on the Provisioning, Implementation, and Best Practices to help you evaluate and adopt this native Planful connector.

    Provisioning

  1. Is this option available for all customers?
    No. This is an opt-in. Special licensing terms apply to the SFTP option. Please work with your Planful account manager.

  2. What process is involved if the customer has a license for the SFTP native connector?
    Customer SFTP site credentials are created and supplied by the Operations Support Team. Create a Support ticket for the SFTP setup to configure these for the customer application.

    The Support team needs to enable the SFTP data load option for the customer application by setting the following configuration flag:
                EnableSFTPDataLoads

  3. What are the steps to establish an SFTP load process?
    Following are the task steps that need to be completed:
    1. SFTP server configuration
    2. Data Load Rule setup of type FTP/SFTP
    3. Cloud scheduler process flow task to automate the load at a scheduled interval

    Note:
    (Internal-only note) The application must be on the new Planful framework called IVY. All new applications are on the new framework by default, and ~98%+ of existing customer applications are on IVY. The IVY framework is not available for non-12-period applications. If your application is not on the IVY framework, we encourage adopting it.


    Implementation/Scoping Related

  4. What are the load types supported via FTP/SFTP connector?
    Currently, we support GL Data Loads, Translations, and Transaction data loads. Note that you should use the Transactions V2 data load option for the Transactions upload.

  5. What is the structure of the FTP/SFTP folder?
    There will be three folders (Input, Success, and Failure) for each Data load rule of type FTP/SFTP.


  6. What happens when the DLR folder name is manually changed at the SFTP folder path?
    The data load simply fails. Manually altering folder names or paths at the SFTP site is not recommended.

  7. What happens if the Input Data Load Rule folder contains no data files?
    The data load for that Data Load Rule fails, and the details are displayed in Maintenance -> Cloud Scheduler -> Job Manager.

  8. When should I position native FTP/SFTP for my customer?
    Position this for any customer looking to automate their load process using file uploads; for such customers, this option makes perfect sense.
    Note:
    This option automates the customer data load process with the scheduling option(s) available via the Planful Cloud Scheduler process flow.

  9. Is this option more appropriate to use over Boomi?
    Not necessarily. The Boomi-based process flow is most appropriate for customers with the following broad requirements:
    1. To automate the data extraction from the customer source system(s)
    2. To perform advanced transformations and create mapping rules for the Planful data structures
        When a customer has these requirements, a Planful data integration specialist performs the scoping of such work and develops a solution via a Boomi-based process flow or other integration tools.

  10. Are there any sort of file/data transformation or manipulations possible if we use the native FTP/SFTP option, and how do we configure it?
    Basic file transformations are available, such as column Join/Split/Move/Replace/Create. Configure them at the Data Load Rule level. For more information, see the help documentation.
    Additionally, if you are looking for translations from your local chart of accounts file to a common chart of accounts configured in Planful, leverage Planful’s Data Integration Translations functionality, as you do for regular data loads with established translations.

  11. I have a customer with a Boomi-SFTP process flow. Should we consider this option?
    We are not suggesting a change to existing process flows. A Boomi flow may have more complex logic and mapping to the Planful GL mappings in a customized way. A conversion might need substantial evaluation, planning, and change management.


    Best Practices /Guardrails

  12. How many files can I configure to load via FTP/SFTP, and what are the general guidelines/guardrails?
    You can run up to 20 files of 1 GB each. We strictly recommend “incremental data loads” for GL and Transactions. Transactions V2 supports incremental loading as well.

  13. What are the file types supported?
    Planful supports importing data from .xls, .xlsx, and .csv formats.

  14. How many days of History can I keep in the SFTP folder?
    We encourage implementing a periodic purging strategy. If you are loading multiple files and the load frequency is more than four times daily, we encourage removing files older than a week from the Success and Failure folders. In other situations, you can keep up to 30 days of history. This ensures consistent performance while connecting and loading the data.

  15. Do we have an automated way of deleting the old files on a scheduled basis?
    Currently, we don’t have an automatic deletion capability in the application. Clearing files from the corresponding FTP/SFTP site folder path is a manual process: FTP/SFTP site administrators access the folders and delete the files manually (a scripted approach is sketched below).
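
For example, site administrators could script the cleanup outside Planful. The following Python sketch uses the third-party paramiko library; the host, credentials, and folder paths are illustrative assumptions:

    import time
    import paramiko

    # Connection details and folder paths are illustrative assumptions.
    HOST, USER, PASSWORD = "sftp.example.com", "planful_user", "secret"
    FOLDERS = ["/dlr_gl_actuals/Success", "/dlr_gl_actuals/Failure"]
    MAX_AGE_SECONDS = 7 * 24 * 3600  # remove files older than a week

    transport = paramiko.Transport((HOST, 22))
    transport.connect(username=USER, password=PASSWORD)
    sftp = paramiko.SFTPClient.from_transport(transport)
    try:
        cutoff = time.time() - MAX_AGE_SECONDS
        for folder in FOLDERS:
            for entry in sftp.listdir_attr(folder):
                if entry.st_mtime < cutoff:
                    sftp.remove(f"{folder}/{entry.filename}")
    finally:
        sftp.close()
        transport.close()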

Insights into Smart Batching for Calculations Mapped to Views and Reports

Calculations mapped to a View or Report can be triggered and run in the background or foreground using the flag option “Enable Calculation Execution in Background for Views and Reports”. This option can be configured from Manage > Application Settings in SpotlightXL.

Note:
When calculations run in the foreground (in synchronous mode), the calculation status is not checked, and multiple instances of the same calculation can run simultaneously, which could impact reporting. This option is not recommended for multi-user scenarios.

When the flag “Enable Calculation Execution in Background for Views and Reports” is set to YES, calculations will run in the background (Note: the calculation must be configured to “Run in Background” at the Calculation setup level) and leverage the “Smart Batching Interval for Calculations” setting. This option can also be configured from Manage > Application Settings in SpotlightXL.

When “Smart Batching Interval for Calculations (in Minutes)” is NONE:

  • When a user submits a calculation, the calculation will be submitted for execution. If another copy of the same calculation is executed, the newly submitted one may fail depending on the overlap status/concurrency. Ensuring you have an optimal scoping setup for your calculations is recommended to avoid any unforeseen issues with the data due to the calculation step dependencies.

When “Smart Batching Interval for Calculations (in Minutes)” is greater than 0:

  • When a user submits a calculation, the calculation will be scheduled with a specified delay time. Multiple copies of the same calculation can be scheduled, provided runtime variables passed from the view or report have different values (at least one variable). Calculations are batched using the key consisting of tenant code, model name, calculation name, and variable names and values.
  • If a scheduled calculation with the same key already exists, the new request will be batched with it. Otherwise, a new calculation will be scheduled. In a heavy concurrency scenario with small calculation run times, we recommend leveraging appropriate smart batching intervals between 1 minute to 60 minutes, depending on the largest calculation run time from your report/view. Ensuring calculations are scoped with runtime variables on the key dimensions is important.
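
Here is a minimal Python sketch of the batching decision described above, with illustrative names; the real scheduler additionally applies the configured delay and dispatches the batch:

    # Batching key: tenant code, model, calculation, and the runtime
    # variable names/values submitted from the view or report.
    scheduled = {}  # key -> pending batch

    def batch_key(tenant: str, model: str, calc: str, variables: dict) -> tuple:
        return (tenant, model, calc, tuple(sorted(variables.items())))

    def submit(tenant, model, calc, variables):
        key = batch_key(tenant, model, calc, variables)
        if key in scheduled:
            return "batched with the existing scheduled calculation"
        scheduled[key] = variables
        return "new calculation scheduled with the configured delay"

    # Same variable values -> batched; different values -> a separate schedule.
    print(submit("t1", "Revenue", "CalcRevenue", {"Dept": "100"}))  # new
    print(submit("t1", "Revenue", "CalcRevenue", {"Dept": "100"}))  # batched
    print(submit("t1", "Revenue", "CalcRevenue", {"Dept": "200"}))  # new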
Note:
When you change the Application Settings mentioned above, you need to log out and log back in for the new settings to take effect.

User Integration APIs Overview and Best Practices

Below is a high-level overview of various user integration scenarios using Structured Planning APIs to create/update users and user approval roles. For more information on the relevant APIs, check out this link.

Use Case: Adding New Users to the Planful Application

  • Recommendations:
    1. Create the user.
    2. Map the user to a User Group.
    3. If the user is a planning user, map the user to an Approval Role and Budget Entity for planning.
    4. Add Scenario access.
    5. Ensure the Dimension Security is updated.
  • API References:
    1. CreateNewUser
    2. MapToUserGroup
    3. AddApprovalRoleForBudgetEntity
    4. AllowScenarioAccessToUser
    5. UpdateDimensionSecurityByUser
    6. ProcessDimension
  • Additional Notes:
    1. The Task Manager role is not part of the CreateNewUser API. A default "Regular User" role is applied when a user is created in the system.
    2. Similarly, a few opt-in flags, such as Two-Step Verification, Disable Password Verification Check, and Enable Post Message, are unavailable, so these flags must be set from the application when needed for applicable users.
    3. For the Dimension Security update, ensure your "Global" Dimension Security configuration is complete in the Planful application.
    4. This also assumes you have Approval Roles, Budget Entities, and Scenarios created in the Planful application. If not, configure these in Planful and update the integration.

Use Case: Updating User Properties

  • Recommendations:
    1. Update user properties such as Active/Inactive status, Navigation Role, Reporting Role, or Support Role using the Update User Property API.
    2. If a user approval role needs to change, use the Remove and Add Approval Role For Budget Entity APIs.
    3. To update the user group mapping (to disable a user group and/or add to a user group), use the MapToUserGroup API.
  • API References:
    1. UpdateUserProperty
    2. RemoveApprovalRoleForBudgetEntity and/or AddApprovalRoleForBudgetEntity
    3. MapToUserGroup
  • Additional Notes: These APIs can be called in any order, depending on your integration requirement. If User Groups or Approval Roles are to be updated, ensure the respective User Groups or Approval Roles are defined in the Planful application. You can leverage the GetUserGroupbyUser API to get the list of User Groups, GetApprovalRolesForUser to get the list of Approval Roles for a user, or the GetAllApprovalRolesForAllUsers API to get all approval roles for all users.

Use Case: Update Dimension Security for All Users

  • Recommendations:
    1. Complete the "Global" Dimension Security configuration in Planful.
    2. Use the Update Dimension Security API to populate the Dimension Security.
  • API References:
    1. UpdateDimensionSecurityByUser
    2. ProcessDimension
  • Additional Notes: After the Dimension Security update, it is recommended to process the dimension and apply dimension security for reporting purposes.

Use Case: Extract Dimension Security for All Users

  • Recommendations:
    1. Use the Get Dimension Security By Segment API to extract the list by user.
  • API References:
    1. Get_Dimension_Security_By_Segment
  • Additional Notes: You may optionally use the GetUsers API to get the list of all users so you can loop through the user list and call the Get Dimension Security By Segment API.

Use Case: Scenario Access Update for a User

  • Recommendations:
    1. Ensure Scenarios are created.
    2. Optionally, use the GetAllBudgetScenarios accessible information and ensure access is updated appropriately with the Add or Remove Scenario access APIs.
  • API References:
    1. AllowScenarioAccessToUser
    2. RemoveScenarioAccessFromUser
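
For the "Adding New Users" use case, the call order might be scripted as below. This is a sketch only: client.call() is a placeholder for your actual Web Services transport, and the payload shapes are illustrative assumptions, not the documented schemas.

    # Onboarding flow for a planning user; client.call() stands in for your
    # actual transport, and payload shapes are illustrative only.
    def onboard_planning_user(client, user, group, role, entity, scenario):
        client.call("CreateNewUser", user)
        client.call("MapToUserGroup", {"user": user["loginName"], "group": group})
        client.call("AddApprovalRoleForBudgetEntity",
                    {"user": user["loginName"], "role": role, "budgetEntity": entity})
        client.call("AllowScenarioAccessToUser",
                    {"user": user["loginName"], "scenario": scenario})
        client.call("UpdateDimensionSecurityByUser", {"user": user["loginName"]})
        client.call("ProcessDimension", {})  # apply dimension security for reporting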

Custom URL Option - Here is what you should consider

With the Custom URL/Subdomain feature, the current URL structure is modified to reflect a unique customer name.

For example, if the customer name is "demoabc," the new URL will be demoabc.planful.com.

This feature is available as an opt-in for the customers, and you can reach out to Planful Support to create a Custom URL for your application. 

Note:
This feature is available only for one-login unified tenants for Structured and Dynamic Planning applications. Here is the help documentation.

Here are a few critical items to consider before initiating the custom URL process, to prepare the customer admins and users:

User Experience: Post-migration, the existing URL (for example, “epm01.planful.com”) will continue to work, but without SSO login support on the login page.

  • Example: If users actively use the platform on "epm01.planful.com" and the URL changes to "demoabc.planful.com," the user will be prompted to log in again with the updated URL.

User Auth and Settings: As part of the URL change, certain authentication settings may be affected. This includes the loss of "Remember User Name" and "Remember Password" settings on the landing page, as well as the reset of cookie settings related to 2FA (Two-Factor Authentication) and OTP (One-Time Password) remembrance. Additionally, notification settings may need to be reconfigured.

  • Example: After the URL change, you may need to re-enter your username, reset your password settings, and adjust your notification preferences.

API Integrations: It is important to note that the change in the customer URL will not impact API login flows, such as OAuth, Web Services, and Web API. These integrations will continue to function seamlessly.

  • Example: If your organization has API integrations with the current URL, it will remain unaffected by the URL change.

Third-Party Integration: It is important to note that the change in the customer URL will not impact any third-party integrations. These integrations will continue to function seamlessly.

  • Example: If your organization has any third-party integrations set up with the current URL, they will remain unaffected by the URL change.

OLH: The existing OLH (Online Help) and other static pages will not receive customized URLs and will continue to function as-is.

Migration: The migration or update is performed in a pre-scheduled off-peak window of up to 30 minutes.

Insights on Reporting Currencies

Planful offers flexible reporting currency options in Structured Planning, allowing users to report in alternate currencies (other than their Common Currency) or perform the conversion at a constant or prior-year rate, etc., for reporting/analysis.

Below are considerations while setting up or troubleshooting a Reporting Currency:

1. Ensure the Currency Exchange Rates are provided. If Currency Exceptions are configured, check whether the exception configurations have the “rate” defined. When no rate is defined for a period, no currency conversion happens.

2. Special note on Currency Type usage (see the sketch at the end of this list): 

      1. If the “Currency Type” for the currency conversion is set to “Not Translated (NT)”, no conversion happens, so please check your account configurations. 
      2. If the “Currency Type” is set to “SAME”, the exchange rate is treated as “1”; as a result, you will see the same number after conversion as the Local Currency.

3. A note on Opening Balance: The behavior is driven by the “Account Type” configuration. For all “Flow” accounts, the balance is reset at the end of the fiscal year. For Balance accounts, however, the behavior depends on the following configurations: 

      1. Do Not Consider Opening Balance: When selected, the Opening Balance won’t be considered. 
      2. Overwrite Opening Balance: Applicable when you want to overwrite the Opening Balance value from the Scenario Setup screen for Planning scenarios. Otherwise, by default, the History Periods configuration is used for the Opening Balance.

4. A note on CC Data usage in Reporting Currency computation: 

      1. Reporting Currency computation involves the following conversion steps: 
        1. Local Currency to Reporting Currency 
        2. Common Currency to Reporting Currency
      2. When the Reporting Currency's currency code (for example, USD) is the same as the Common Currency (for example, USD), you may want to skip this data copy; to do so, select “Do NOT Consider CC Data”.

5. A note on CTA (Currency Translation Adjustment): By default, the default CTA configuration is used. However, for Cash Flow reporting or what-if analysis, you can override this behavior by leveraging the CTA Sets capability, which allows computing CTA on select Accounts only. Define CTA Sets to “Overwrite Default CTA” for the Reporting Currencies.

6. For Plan or Forecast scenario Closed Periods, Reporting Currency data is recalculated during the Closed Period refresh.

7. A note on the number of Reporting Currencies and how you can process them: 

      1. You can have a maximum of 5 Reporting Currencies. 
      2. More reporting currencies may impact performance, particularly when running the Consolidation Process.
        1. Be judicious while configuring Reporting Currencies in the application. Configure only the Reporting Currencies that are critical and necessary.
        2. While running a consolidation process, be judicious about which Reporting Currencies to include. Use the Consolidation Process user selection panel and select only the required Reporting Currencies as part of the consolidation process.
        3. The Currency Conversions process flow option also performs the Reporting Currencies computation. This is a great way to take advantage of Reporting Currency analysis for Planning scenarios without relying on the Consolidation processing capabilities. If you need CTA-type calculation support, you may need a Consolidation setup.
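
Here is the Currency Type behavior from note 2 above as a small Python sketch (the "AVG" type is illustrative; NT and SAME behave as described):

    def convert(amount, currency_type, rate):
        if currency_type == "NT":    # "Not Translated": no conversion happens
            return None
        if currency_type == "SAME":  # exchange rate is treated as 1
            return amount
        if rate is None:             # no rate defined for the period: no conversion
            return None
        return amount * rate

    print(convert(100.0, "SAME", None))  # 100.0 (same as Local Currency)
    print(convert(100.0, "AVG", 1.08))   # 108.0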

Insights and Best Practices Using Opening Balance in Data Loads

Planful Data Integrations in the Structured Planning application allows mapping “Opening Balance” from the source system to Planful when loading MTD-type data. This column value is used to compute YTD following the data principles. 

With our new data fabric framework, IVY, we have made subtle improvements to support Opening Balance while consistently maintaining data integrity for MTD and YTD balances.

Scenario: Actual

  • Data Load Format: Load all periods for FY1 with Opening Balance.
    Opening Balance Consideration: The Opening Balance value is considered from the first period, and YTD is computed accordingly.

  • Data Load Format: Load all periods for multiple fiscal years (FY1, FY2, etc.) with Opening Balance.
    Opening Balance Consideration: The Opening Balance value is considered from the first period of the first fiscal year, and YTD is computed accordingly. For FY2 and beyond, the Opening Balance input is not considered, as the application auto-computes the balances.
    Best Practice: If varying Opening Balances need to be provided, load each fiscal year separately.

  • Data Load Format: Load from the middle of the fiscal year, or individual months, without including the first period of the fiscal year.
    Opening Balance Consideration: The Opening Balance value is considered from the first fiscal period if there is no pre-existing data, and YTD is computed for all periods starting with the first period. If some previous periods are locked, the Opening Balance is applied to the first open period of the fiscal year. The Opening Balance is not considered when pre-existing data exists for the previous periods.
    Best Practice: We highly encourage not including Opening Balance in monthly uploads. If there is a need, ensure the balance is accurate from the beginning of the year, with the periods unlocked and loaded from the first period. If there is no true opening balance to maintain, “unmap” the Opening Balance column in the Data Load Rule's data mapping section.

Scenario: Non-Actual

  • Data Load Format: Load all periods for the first fiscal year (FY1) of the scenario with Opening Balance.
    Opening Balance Consideration: The Opening Balance value is considered from the first period, and YTD is computed accordingly.

  • Data Load Format: Load all periods for multiple fiscal years (FY1, FY2, etc.) with Opening Balance.
    Opening Balance Consideration: The Opening Balance value is considered from the first period of the first fiscal year only. Even if you do yearly loads for a non-actual scenario, the application honors the Opening Balance only for the first fiscal year's first period.

  • Data Load Format: Load data for Period 2 and beyond with an Opening Balance column when the previous period (such as Period 1) is closed.
    Opening Balance Consideration: The Opening Balance value is not honored, as Period 1 is closed. Applicable scenario types include Forecast and Plan scenarios with closed periods.

In summary:

  1. Be judicious in using Opening Balance loads in your DLRs; include Opening Balance loads only at the beginning of the fiscal year. 
  2. When possible, load YTD data instead of MTD data for Balance accounts.

Number Formatting Ideas on Dashboard Tables with Custom or Calculated Members

You can present alternate column- or row-level number formatting for your Dashboard tables to represent your key metrics using the following method.

In the below example, we have $ and % representations made in the Dashboard table views.


  1. To represent the % format on the dashboard view, create a calculated member or custom member like the one below, with a type cast to String, concatenated with “%”: 
    • CStr(CInt(([Account].&[304]/[Account].&[216])*100))+"%" 
  2. If you need to apply variance-based calculations for the Actual-to-Budget variance of the same metrics, create custom members on the Scenario dimension with the expression below to force the account in context for the % format: 
    • IIF(Account.CurrentMember is [Account].&[993],                                      
      CStr(CInt((([Account].&[304],[Scenario].&[1])/([Account].&[216],[Scenario].&[1]))*100 - (([Account].&[304],@CURBDGT@)/([Account].&[216],@CURBDGT@))*100))+"%",([Scenario].&[1]-@CURBDGT@))

This method demonstrates the custom member and calculated member capabilities available with our dimensional member-based expressions.

Zoom to Data Leaves for Excel Analysis Views is now available for Direct Access to Planning Models

Support for the Zoom In -> Data Leaves option allows you to query your Direct Access planning models much more efficiently, without going through numerous clicks. 

When you zoom in on large dimensions like Project or Client, you may experience challenges seamlessly navigating to the members with data. This limitation has now been addressed with our new Data Leaves option for Excel Analysis Views built for Direct Access to Planning, enabling efficient querying of the dimension members with data when you zoom in.

Currently, this feature is available as an opt-in with a flag (Enable Axis Query API flag). This option will become available as a standard feature in the upcoming releases.

Optimizing the DownloadData Calc Performance with Periodic Maintenance of Financial Model

Did you know that periodic maintenance of the Finance model (HACPM_Financial) can help your overall calculation performance involving the DownloadData step? In one customer situation, we saw performance improve by over 40% simply from a Clear Model on HACPM_Financial.

Depending on the design and requirements for housing the data in HACPM_Financial, we recommend introducing a maintenance process to keep only relevant data in the HACPM_Financial.

You can employ the following two approaches:

  1. Clear Model from the Model -> Setup screen. Using this Clear Model option on HACPM_Financial clears all the data, and your model remains in the Generated state. With this method, HACPM_Financial is cleared entirely and stays in that state until you run the DownloadData step via your Calculation. 
  2. ClearAllModel Calculation Type with Scope. This helps you selectively clear data and retain the required data within HACPM_Financial by applying filters on Scenario, Time, and Measures.

Use this data maintenance process to ensure that you have consistent performance with the DownloadData step and the DP calculations.

Incremental Cube Processing Insights

Did you know that Incremental Cube Processing happens even without your application's Automated Data Refresh (ADR) option turned ON? Read on to learn how.

Did you know that you can control the frequency of automated cube processing events in your Job Manager with Automated Data Refresh (ADR)?

Incremental (change-only) data processing for the Cube helps you see data faster in reports, and the performance of incremental processing is significantly faster than a Full process. In most cases, the improvement is ~4X.

Here are some key insights into this capability:

  • Our data processing framework, IVY, makes incremental processing possible by tracking data changes across our data sources, such as Templates and DLRs. Learn more about IVY here! At the time of writing, approximately 70% of applications are on the IVY framework. 
  • As IVY continuously tracks the delta changes across all your data sources, such as Templates, Journals, or Data Loads, when a user triggers the cube process manually and a delta is available, cube processing is performed automatically in the “Incremental” processing mode rather than the traditional “Full” process mode. This gives you a significant advantage: irrespective of the ADR option, you can take advantage of “Incremental” processing. The only prerequisite is that the application should be running on IVY. 
  • With the Automated Data Refresh (ADR) option, you can automate the cube refresh process so you can see the data after template changes or journal postings. See more information on Automated Data Refresh here! However, depending on the template activity rate, data loads, journal postings, etc., you may see a continuous stream of automated cube-processing events. This situation will likely impact your report performance and leave too many cube processes in the Job Manager! It can be optimized by leveraging a “Wait time” configuration.

When should you consider the ADR “wait time” configuration? 

  • If you start to see frequent cube process events in the Job Manager (every few seconds). 
  • Or, if your report users experience lag in report run times, for example, longer run times on key reports when they are run right after a cube processing event.

How does the ADR “wait time” option work?

  • The flag value specifies the number of seconds (say, 600 seconds) by which the cube processing task is delayed. Any duplicate requests are discarded while a task is being processed or in the Queued state. 
  • This option is available only if the application is fully on the IVY framework.
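
Conceptually, the debounce works like the following Python sketch (the wait value is illustrative; Support sets the actual Debounce WaitTime):

    import time

    WAIT = 600  # seconds; illustrative value for Debounce WaitTime
    pending_until = 0.0

    def request_cube_process(now: float) -> str:
        global pending_until
        if now < pending_until:
            # Duplicate request while a task is queued: discarded.
            return "discarded"
        pending_until = now + WAIT
        return f"cube processing scheduled for t={pending_until:.0f}"

    t0 = time.time()
    print(request_cube_process(t0))        # scheduled
    print(request_cube_process(t0 + 30))   # discarded
    print(request_cube_process(t0 + 700))  # scheduled again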

How do I get the ADR “wait time” option enabled?

  • Contact Planful Support via the Support case. 
  • Support adjusts the flag value in seconds (say, 600 or 1800 seconds): 
    • Debounce WaitTime
Note:

A long wait-time value may negatively impact you, as you will miss the opportunity to take advantage of “incremental” processing when you have significant data changes.

Option to Optimize the DLR process when you have multiple Writeback steps in the Dynamic Planning Calculation

Using the same Data Load Rule in multiple Writeback map steps of the same calculation is not recommended by design. However, if you have to use it for a specific situation, you can optimize the load behavior using the following option.

  • Consolidated Writeback

When to consider this option?

  • When the same DLR ID is referenced as the Writeback ID in different map steps listed on a Calculation

How does this work and improve the performance?

  • Data from all writeback steps are accumulated and then written back to Structured Planning once all the calculation steps have been completed.

What happens if there are multiple DLR IDs in the same calculation across different maps?

  • Accumulated data is written back to Structured Planning separately for each DLR, as the sketch below illustrates.
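
Conceptually, the accumulation behaves like this Python sketch (names and row shapes are illustrative):

    from collections import defaultdict

    # Consolidated Writeback: rows from every writeback step are accumulated
    # per DLR ID and flushed once, after all calculation steps finish.
    accumulated = defaultdict(list)

    def writeback_step(dlr_id: str, rows: list):
        accumulated[dlr_id].extend(rows)  # no load happens yet

    def load_to_structured_planning(dlr_id, rows):
        print(f"DLR {dlr_id}: loading {len(rows)} rows in a single writeback")

    def finish_calculation():
        for dlr_id, rows in accumulated.items():
            load_to_structured_planning(dlr_id, rows)  # one load per DLR
        accumulated.clear()

    writeback_step("DLR_1", [{"acct": "4000", "amt": 100}])
    writeback_step("DLR_1", [{"acct": "4100", "amt": 250}])
    finish_calculation()  # DLR_1: loading 2 rows in a single writeback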

How to get this enabled?

  • Contact Planful Support via the Support case. 
  • Support enables the following flags for the Dynamic Planning application: 
      • Enable New Writeback Features 
      • Enable Consolidated Writeback
