September21 Release Notes

Dynamic Planning: Aggregate Functions in Maps

Aggregate functions are now available in External Source Model (ESM) Maps. You can use these functions directly while moving data from one ESM to another through maps. An Aggregate Function column has been added to Maps.

The following Aggregate functions have been included in the maps:

  • COUNT

  • SUM

  • MIN

  • MAX

  • AVG (average)

The following list provides a quick overview of each function.

COUNT: Returns the count of rows of the source ESM column where the field value is not empty, null, or undefined. Applicable to Text, Numeric, and Date source fields; the mapped target ESM field must be of type Numeric.

SUM: Returns the sum of all values from a list of numbers in the model. Applicable to Numeric source fields.

MIN: Returns the minimum value from a list of values in the model. Applicable to Numeric and Date source fields.

MAX: Returns the maximum value from a list of values in the model. Applicable to Numeric and Date source fields.

AVG: Calculates the average for a specified range of values. Applicable to Numeric source fields.

Note:
The Target field type will be the same as the source field type, except for the COUNT function where the field type will always be Numeric.
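As an illustration of these semantics (plain Python on invented values, not product code), note how COUNT skips empty values while MIN and MAX also apply to dates:

from datetime import date

values = [120, None, 250, 80, None]   # a Numeric source column with empty cells
dates = [date(2021, 1, 5), date(2021, 6, 30), date(2021, 3, 15)]

present = [v for v in values if v is not None]
print(len(present))                 # COUNT -> 3 (empty/null values are skipped)
print(sum(present))                 # SUM   -> 450
print(min(present), max(present))   # MIN, MAX -> 80 250
print(sum(present) / len(present))  # AVG -> 150.0
print(min(dates), max(dates))       # MIN and MAX also work on Date fields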

Previously, the Aggregate functions were not available in the Maps.

Now, when you have a model with a lot of data and want to compile and organize the raw data into a more consumable and summarized format, you can use the Aggregate functions.

Business Example

Suppose you have a model containing data about Sales opportunities. The model can contain raw data about leads, different sales owners, sales segments, annual recurring revenue (ARR), and so on.

If you want to aggregate that data and find specific information related to a sales segment or a sales owner, you can use the aggregate functions available in the map and pull only relevant data into another ESM. The functions give you the flexibility to organize data and identify key values such as the maximum value, minimum value, total count, and average. You can create a separate model containing only the fields that are required. The following image shows the model created with only relevant fields.

You can create a map with your actual ESM as the Source Model and the new model as the Target model. The following image shows the map containing different fields such as Sales Segment, Opportunity Owner, Lead Source, ARR.

You can group the data as per your requirement and configure the fields accordingly in the Maps. You will have the Aggregate Function column available in the map to select the required function for getting the output. In the example above, the data is grouped by Sales Segment, Opportunity Owner, Lead Source, while the ARR fields are used to find the minimum, maximum, count, average, etc. So, when you run the calculation, you get summarized data that only provides information about the relevant fields.

If you want to filter and segregate data only for a particular field, you can easily do it. For example, you can easily find and track the count, sum, average of opportunities generated by a specific Opportunity Owner.
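A minimal pandas sketch of the same idea, grouping hypothetical opportunity rows by Sales Segment, Opportunity Owner, and Lead Source and aggregating ARR (the data values are invented for illustration):

import pandas as pd

opps = pd.DataFrame({
    "Sales Segment":     ["Enterprise", "Enterprise", "SMB", "SMB"],
    "Opportunity Owner": ["Mary", "Mary", "John", "John"],
    "Lead Source":       ["Partner", "Partner", "Web", "Web"],
    "ARR":               [50000, 70000, 12000, 18000],
})

# Group by the text fields, then aggregate ARR with MIN/MAX/AVG/COUNT/SUM
summary = (
    opps.groupby(["Sales Segment", "Opportunity Owner", "Lead Source"])["ARR"]
        .agg(["min", "max", "mean", "count", "sum"])
        .reset_index()
)
print(summary)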

Business Value

With these functions in maps, you can directly find key values such as sum, count, maximum, minimum, and average while moving data from one ESM to another. These functions significantly reduce the effort and time required to organize the data and find key values. You have the flexibility to customize the fields in the map based on your requirements and see different outputs with just a click of a button.

You can map a single source field to multiple target fields as per your requirement. For example, ARR was mapped to multiple target fields such as sum and count. It is not mandatory to map all the source fields in the map. For example, if you do not want to track data for Lead Source in the example above, you can update the Default Value for that field to NA and update the Maps To column value to DimensionFilter. The calculation will then aggregate the data based on the updated values, and you will see data based only on the Segment and Sales Rep.

Note:
  • These functions are applicable only for ESM to ESM maps.

  • The aggregate functions treat data as case-insensitive. For example, if your data contains East, east, and EAST, the aggregate functions may use any of these values for grouping, so the casing shown in the result is not deterministic.

In Practice: To use the Aggregate functions in Maps

  1. In SpotlightXL, select the Model task and the Map subtask.

  2. Select the name of the Source Model defined on the Source Model page.

  3. Enter a name for this map.

  4. Add the required description.

  5. Select the type Data for loading external source data.

  6. Select or specify the name of the ESM you want to map the Source ESM to.

  7. For Append Missing Dimension Members, select No when mapping to another external source model.

  9. Optionally, specify the name(s) of one or more fields in the Source ESM to use for filtering the data you want to load. Under the Value column, you can specify a leaf-level field.

  9. In the Fields column, specify fields from the Source Model that will be mapped to fields in another External Source Model.

  10. Select any of the following in the Maps To column:

  • DimensionFilter: indicates that there is no field in the Source ESM to map to the field in the Target ESM. If you select DimensionFilter, you must specify the Default Value column.

  • Value: indicates that this field in the external source is a value to be copied as is to the specified field in the target ESM.

  11. In the Target Field column, select the name of the field in the target ESM to correspond with this field in the source ESM.

  12. In the Default Value column, specify the text, date, or number that should be placed into this field for all rows in the target ESM if Maps To is DimensionFilter. Or, leave this column blank if Maps To is Value.

  13. In the Aggregate Function column, select the required function such as COUNT, MIN, MAX, etc.
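Conceptually, each row of the finished map pairs a source field with a target field plus its mapping options. A hypothetical sketch of a few rows in Python terms (the field names here are invented for illustration):

map_rows = [
    # Field (source), Maps To, Target Field, Default Value, Aggregate Function
    {"Field": "Region",  "Maps To": "Value",           "Target Field": "Region",        "Default Value": None, "Aggregate Function": None},
    {"Field": "Revenue", "Maps To": "Value",           "Target Field": "Total Revenue", "Default Value": None, "Aggregate Function": "SUM"},
    {"Field": "None",    "Maps To": "DimensionFilter", "Target Field": "Channel",       "Default Value": "NA", "Aggregate Function": None},
]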

Business Use Cases

Let’s walk through the Aggregate functions with some examples. Suppose you have a model containing raw data about leads, different sales owners, sales segments, and annual recurring revenue (ARR).

Now, suppose you want to track your Sales opportunities based on different parameters: for example, the maximum ARR per Sales Segment, or the opportunities generated from a specific lead source.

You must perform the following:

  • Create a target model with the required fields.

  • Create a Map to move data for relevant fields from your Source model to the Target model.

  • Select the required Aggregate function.

  • Run the calculation to view the output.

Creating a Target Model with Relevant Fields

You must create a Target Model to view and track the data for relevant fields based on your requirement. In this case, as you want to track Sales Opportunities based on different parameters, you can create your Target Model to have fields like Segment, Sales Rep, Lead Source, Minimum ARR, Maximum ARR, and so on.

To do that, navigate to SpotlightXL and create an External Source Model, for example, “Opportunity Analytics”.

Add the required fields to the Field column of your “Opportunity Analytics” model. If you want to segregate data based on specific fields, include only those fields in your model, for example, Segment, Sales Rep, Lead Source, Minimum ARR, and Maximum ARR.

Once you add the fields, specify the type in the Type column, for example, Text, Numeric, or Date, based on the data in your Source ESM. In this example, Segment, Sales Rep, and Lead Source are of type Text, and the ARR fields are of type Numeric.

In the Include in Data Load column, specify Yes as the value is coming directly from the data source.
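Based on the fields used later in this example, the “Opportunity Analytics” model definition would look roughly like this:

Field                   Type      Include in Data Load
Segment                 Text      Yes
Sales Rep               Text      Yes
Lead Source             Text      Yes
Min ARR                 Numeric   Yes
Max ARR                 Numeric   Yes
Avg ARR                 Numeric   Yes
Count of Opportunities  Numeric   Yes
Total ARR               Numeric   Yes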

Once you have created the Target model and set all the required fields, proceed to creating a Map.

Creating and Configuring a Map

You will be creating a Map to pull the data from your source model to your target model. You must define the Source Fields and then map them to the respective Target Fields. Once you define the source and target fields based on your requirement, you can select the Aggregate functions for the fields. By selecting the Aggregate functions in the Map, you can Aggregate the data as per your requirement while simultaneously moving the data from Source Fields to Target fields. You have the flexibility to configure only the fields that you want to see in the target output.

Let’s walk through configuring a map for different use cases.

Scenario 1

Suppose that, for your Sales Opportunity data, you want to aggregate data by Segment, Sales Rep, and Lead Source, and track the Minimum ARR, Maximum ARR, Average ARR, Total ARR, and Count of Opportunities for a Sales Rep in a specific Segment, or find the Lead Source behind a Sales Rep’s opportunities.

  1. In SpotlightXL, select the Model task and the Map subtask.
  2. Select “Opportunity Data” as the Source Model.
  3. Enter a name for this map, for example, “Summarize Opportunity”.
  4. Select the type Data for loading external source data.
  5. Specify the Target Model name that you created, for example, “Opportunity Analytics”.
  6. In the Field column, specify the fields that you want to move from your Source Model, Opportunity Data. In this case, add Sales Segment, Opportunity Owner, Lead Source, and ARR.
  7. Select Value in the Maps To column for the fields that you want to be copied. In this case, specify Value for Sales Segment, Opportunity Owner, Lead Source, and ARR.
  8. In the Target Field column, select the names of the fields defined in the target model. In this case, specify Segment, Sales Rep, Lead Source, Min ARR, Max ARR, Avg ARR, Count of Opportunities, and Total ARR.
  9. Leave the Default Value column blank for all fields.
  10. In the Aggregate Function column, select the following for the respective ARR target fields:
    • MIN for the Min ARR field

    • MAX for the Max ARR field

    • AVG for the Avg ARR field

    • COUNT for Count of Opportunities field

    • SUM for Total ARR

The following image shows the Map configuration for the above use case.
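In tabular form, the steps above describe a map configuration like this (cells with no value are left blank):

Field               Maps To   Target Field             Default Value   Aggregate Function
Sales Segment       Value     Segment
Opportunity Owner   Value     Sales Rep
Lead Source         Value     Lead Source
ARR                 Value     Min ARR                                  MIN
ARR                 Value     Max ARR                                  MAX
ARR                 Value     Avg ARR                                  AVG
ARR                 Value     Count of Opportunities                   COUNT
ARR                 Value     Total ARR                                SUM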

Run Calculation with a Map

Now, for the above Map, you must create a calculation. The calculation will run the map.

In the Calculation, select the target model name. In this case, select “Opportunity Analytics”.

Select ExternalSourceMap as the type in the Type column.

Specify the name of the map created. In this case, use “Summarize Opportunity”.
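Put together, the calculation definition for this example amounts to the following (labels approximate):

Target Model: Opportunity Analytics
Type:         ExternalSourceMap
Map Name:     Summarize Opportunity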

When you run the calculation, you will see that the complete Opportunity data is segregated as per the Map configuration and displayed in the Target Model.

Reviewing Output in the Target Model

Once you run the calculation, you will see that the complete Opportunity data is segregated based on the Sales Segments (Enterprise and SMB), Opportunity Owners, and Lead Sources. Based on the fields configured in the Map, you see the data segregated in the output. In this case, as you have mapped the Sales Segment, Opportunity Owner, Lead Source, and ARR source fields to their respective Target fields, you see the values grouped accordingly. You can see the Minimum, Maximum, and Average ARR, Total ARR, and Count of Opportunities for all Sales Reps in their respective Segments. You can also see the Count of Opportunities generated by an individual Sales Rep in a specific segment, and the lead sources used by a Sales Rep to generate the Opportunities.

Once the data is displayed in columns, you can easily filter data for a specific Sales Rep and track all the details as shown in the image above. For example, if you want to see the data for the Sales Rep Mary, select Mary from the Sales Rep column. Once you select the Sales Rep, you can see the Segments where that Sales Rep has generated opportunities. In this case, Mary has generated opportunities in the Enterprise segment, and the total Count of Opportunities generated by her is 8: 7 were generated from the Lead Source “Partner” and 1 from another Lead Source. You can also check the Total ARR for Mary along with the other values such as Min ARR, Max ARR, and Average ARR. You can similarly view and track details for any Sales Rep.

Scenario 2

Suppose that, for your Sales Opportunity data, you want to aggregate data by Segment only, and track the Minimum ARR, Maximum ARR, Average ARR, Total ARR, and Count of Opportunities.

  1. In SpotlightXL, select the Model task and the Map subtask.
  2. Select “Opportunity Data” as the Source Model.
  3. Enter a name for this map, for example, “Summarize Opportunity”.
  4. Select the type Data for loading external source data.
  5. Specify the Target Model name that you created, for example, “Opportunity Analytics”.
  6. In the Field column, specify the fields that you want to move from your Source Model, Opportunity Data. In this case, add Sales Segment and ARR. As your Map previously also contained Opportunity Owner and Lead Source, update those rows and specify “None” in the Field column.
  7. Select Value in the Maps To column for the fields that you want to be copied. In this case, specify Value for Sales Segment and ARR, and specify DimensionFilter in the Maps To column for the fields set to “None”.
  8. In the Target Field column, select the names of the fields defined in the target model. In this case, specify Segment, Min ARR, Max ARR, Avg ARR, Count of Opportunities, and Total ARR.
  9. In the Default Value column, specify “NA” for the fields set to “None”, and leave this column blank for the Sales Segment and ARR fields.
  10. In the Aggregate Function column, select the following for the respective ARR target fields:
    • MIN for the Min ARR field

    • MAX for the Max ARR field

    • AVG for the Avg ARR field

    • COUNT for Count of Opportunities field

    • SUM for Total ARR

The following image shows the Map configuration for the above use case.
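In tabular form, the steps above describe a map configuration like this (cells with no value are left blank):

Field           Maps To           Target Field             Default Value   Aggregate Function
Sales Segment   Value             Segment
None            DimensionFilter   Sales Rep                NA
None            DimensionFilter   Lead Source              NA
ARR             Value             Min ARR                                  MIN
ARR             Value             Max ARR                                  MAX
ARR             Value             Avg ARR                                  AVG
ARR             Value             Count of Opportunities                   COUNT
ARR             Value             Total ARR                                SUM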

Run Calculation with a Map

Now, for the above Map configuration, you must create and run the calculation.

Reviewing Output in the Target Model

When you run the calculation for the above configuration, you will see that the complete data is segregated based on the segments Enterprise and SMB.

The Segment column shows values, whereas the Sales Rep and Lead Source columns show only the default value “NA” that they were set to.

So, based on the fields configured in the Map you would see the data segregated in the output. In this case, as you have only mapped the Sales Segment source field with the Segment target field, you see the values grouped by Segment.

You can see the Minimum, Maximum, and Average ARR, Total ARR, and Count of Opportunities for the Enterprise and SMB segments.

These functions help compile and summarize large volumes of data from a source system into a consumable output that can easily be used for accurate analysis and tracking.

Dynamic Planning: Heads Up! Additional Use Case Added for the RANGELOOKUP Function in Online Help

Platform: Choose Multiple Reporting Hierarchy in Export Data

Now, when you export data from the Export Data screen, you can select multiple members in the Reporting Hierarchy section. Only the selected data is filtered and exported to an Excel sheet.

You can select members at any level of the hierarchy, and only the selected members' details are exported. If you select a roll-up member, the information for all the corresponding leaf members under that roll-up member is also exported.

Business Value

This functionality provides you the capability to export data for multiple Reporting Hierarchy members for the selected Scenario. Previously, this capability was limited to only one Reporting Hierarchy member. Selecting multiple hierarchy members reduces the effort of exporting the data for the required scenario.

In Practice: Accessing the Reporting Hierarchy Enhancement

  1. Go to Maintenance > Data Integration > Export Data.

  2. Provide the required information on the Export Data Setup page. You can provide the following details in each section:

- Setup File Format:

  1. File Type - Select the file format for the exported file.

  2. Data Format - Select the order in which the data should appear in the exported file.

- Setup Columns and Order:

  1. Column

  2. Header Name

  3. Value

  3. Click Save and Next. The Export Data page appears. The Reporting Hierarchy section now displays checkboxes, which you can use to select the required members and export the data related to them.

Platform: Set decimal format for Amount in Export Data

Now, you can format the decimal places for the Amount column during Export Data configuration. Following are the default decimal values set for each file type for Export Data:

  • Excel - 2

  • Txt - 6

  • CSV - 6

You can set this to any value between 0 and 6. The numbers in the Amount column of the exported file are displayed with the configured number of decimal places. In Excel exports, the decimal formatting is applied to the original value.
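For instance, the same amount rendered with the Excel default (2 decimal places) versus the Txt/CSV default (6), illustrated in Python:

amount = 1234.567891
print(f"{amount:.2f}")  # Excel default (2)   -> 1234.57
print(f"{amount:.6f}")  # Txt/CSV default (6) -> 1234.567891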

In Practice: Accessing the Decimal Format option

  1. Go to Maintenance > Data Integration > Export Data.

  2. Provide the required information on the Export Data Setup page. You can provide the following details in each section:

- Setup File Format:

  1. File Type - Select the file format for the exported file.

  2. Data Format - Select the order in which the data should appear in the exported file.

- Setup Columns and Order:

  1. Column

  2. Header Name

  3. Value

Set the value for the Amount column on this screen to define the number of decimal places.

Platform: Restricted Duplicate Translation Lines

Now, when you add translation lines in the application, you cannot save the updates if there are any duplicate records. When you click Save after adding the translation lines, the application validates the added records for any errors. If there are any duplicate lines, the application displays an exception report showing the duplicate lines. You can use the Autocorrect and Save option to remove the duplicate lines and save the data.

Platform: Asynchronous Processing for Export data

Export Data is processed asynchronously in all newly configured applications. Asynchronous data processing increases the application’s performance, and the Export Data request is processed with a quick response time.

Business Value

When Export Data requests with large sets of data are processed synchronously, application performance is impacted. For an improved experience and efficiency, we recommend enabling asynchronous processing in your application. This feature is available as an opt-in for existing customers; reach out to Planful Support to enable asynchronous exports in your application.

Platform: Refresh Preloaded Data API

The Refresh Preloaded Data API updates the Preload Budget Scenario with the data from the reference Scenario or the CC data based on the Scenario setup.

You must generate an APIKey before you can use the API. Pass the following information to generate the APIKey.

Method

POST

Syntax

<Provide the Application Url>/auth/login

Sample Body

{
  "Username": "xyz@planful.com",
  "Password": "Ayz@123",
  "TenantCode": "Apr2021Demo",
  "ClientType": "ExcelAddin"
}

When you run this, an authorization key is generated. Use this key as the APIKey to refresh the preload budget scenario.

Do the following to refresh the preload budget scenario:

Method

POST

Syntax

<Provide the Application Url>/Scenario/RefreshPreloadedData?scenarioCode=<provide the scenario code>

Note: You can find the Scenario Code on the Scenario Setup screen.

Headers

Key : APIKey

Value : Authorization Key

Response

{
  "status": "Success",
  "message": "Refresh Preload Budget data completed successfully"
}

The API refreshes the data and returns a success message.
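A minimal Python sketch of the two calls described above (the URL, credentials, and scenario code are placeholders; how the login response carries the key is not shown above, so that part is an assumption):

import requests

BASE_URL = "https://<your-application-url>"  # placeholder for your application URL

# Step 1: generate the APIKey (fields as in the sample body above)
auth = requests.post(
    f"{BASE_URL}/auth/login",
    json={
        "Username": "xyz@planful.com",
        "Password": "Ayz@123",
        "TenantCode": "Apr2021Demo",
        "ClientType": "ExcelAddin",
    },
)
api_key = auth.text  # assumption: the response body carries the authorization key

# Step 2: refresh the preload budget scenario, passing the key as the APIKey header
resp = requests.post(
    f"{BASE_URL}/Scenario/RefreshPreloadedData",
    params={"scenarioCode": "<provide the scenario code>"},
    headers={"APIKey": api_key},
)
print(resp.json())  # expected: {"status": "Success", "message": "...completed successfully"}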

Platform: NetSuite Connector Update

Now, when you create a new NetSuite profile on the Configure Profile screen, you must select the Signature Type HMAC-SHA256 to establish a successful connection between the Planful and NetSuite databases. You can use this profile to pull data from the NetSuite database.

Business Value

Previously, HMAC-SHA1 was the only option available for Signature Type. Recently, using this signature type has resulted in connection issues between NetSuite databases and Planful applications. To address this issue, the HMAC-SHA256 signature type has been added as an additional option in the Signature Type list. You can use this signature type to connect with the latest NetSuite version.

In Practice: Selecting the HMAC-SHA256 Signature Type

  1. Go to Maintenance > Admin > Configuration Tasks > Cloud Services.

  2. Click the NetSuite Connect tab.

  3. Click Add to create a NetSuite Profile. The Configure Profile screen appears.

Here you can view the Signature Type field, and you can select HMAC-SHA256 from the list.

Platform: Enhanced the Global Search

Now, we have enhanced the look and feel of the global search on the application screen's top-right corner. The following is a list of the enhancements made to the Global Search:

  • Added a tooltip to help you understand the type of pages that you can search for within the application.

  • Added intuitive icons for each dynamic search result to indicate the application’s section from where the result is being displayed.

  • Added the Clear option to remove the previous search text from the Search field quickly.

  • Added an intuitive display for the no results screen.


Platform: Enhanced the Bookmark Menu

Now, we have enhanced the look and feel of the bookmark menu on the application screen's top-right corner.

The following is a list of the enhancements made to the bookmark menu:

  • Added a tooltip to help you understand the types of pages that you can bookmark within the application.

  • Added intuitive icons for each recent bookmark to indicate the application’s section that is bookmarked.

  • Added a Settings icon to access the Bookmark Manager.


Platform: Enhanced the Dashboards Filter Menu

Now, we have enhanced the filter menu on the Dashboards page to combine the sort and filter functionalities. You can now view the options to filter by Name, Last modified by me, Last modified, and Last opened by me, and the options to sort in ascending or descending order, all under the same filter icon.

The following is an illustration of the same.

Platform: Enabled support for external Task Location

Now, you can provide an external link in your Task Location. Previously, you could only link your task to a location within the application. This has now been improved to allow you to link a task to your own application or any other external application.

Business Value

This functionality provides you the capability to link your task to an external application, for example, Jira. If you wish to link your task location to a web page outside the application, you can do so using this feature.

Platform: Enhanced the Consolidation Processes Toolbar

Now, when you click on any consolidation process such as Standard Journals, Recurring Journals, Dynamic Journals, Non-Controlling Interest, Reclassifications, Eliminations, and Validations, you will see a standard set of menu options for the entire list of processes. The menu comprises:

  • Add

  • Upload

  • Export as Excel

  • Print

  • Apply Filter

The following is an illustration of the standard set of menu options.

When you select a specific process, you see the following additional menu options:

  • Edit

  • Copy

  • Delete

The following is an illustration of the set of menu options that appears when a specific process is selected.

Predict: Enhanced the Signals Overview

The Signals Overview page has been enhanced with a tabular, more comprehensive look and feel. The following is a list of the enhancements made to the Overview page:

  • You can now apply filters to the Signals from a drop-down menu at the top of the page. The filters can also be hidden by clicking on the Hide Filters toggle switch.

  • You can now see donut charts of the Total number of GL Combinations processed and the count of GL combinations having High Risk signals, Medium Risk signals, and Low Risk signals.

Note:
The color to denote the Low Risk signals has been modified from yellow to a sea-green shade.
  • Implemented Total Impact and Peak Impact.
    • The Total Impact represents the total dollar impact for all the month-wise signals of a specific GL combination for the selected fiscal year. For a single GL combination, there could be different Signals with different risk categories. The variances of all those Signals, measured against the upper range (or against the lower range for Signals below it), added together give the Total Impact. The Total Impact is essentially the total variance of Signals for a fiscal year.

    • The Peak Impact represents the dollar impact for the highest signal available (the one with the highest variance) for a given GL combination for the selected fiscal year.

    • The Total Impact and the Peak Impact are not associated with the Risk Levels.
       For example, for a given GL combination and fiscal year, there could be a medium-risk signal in June having a higher dollar impact than a high-risk signal in December with a lower dollar impact.
       Say the actuals for all Decembers across the previous years are within the $1,000 to $3,000 range. In this case, even $5,000 could be a high-risk signal. Whereas for June, the actuals are usually very high and range from $500,000 to $1,000,000. Then, even a medium or low-risk signal that is slightly off the normal range could have a much larger dollar impact than the high-risk signal in December.

    • Comparing Total Impact and Peak Impact (see the sketch after this list):
       When we consider the Total Impact, the presence of multiple low dollar impacts averages out the one or two high dollar impacts. This is where Peak Impact helps you identify anomalies that could be the result of typos or fat-finger errors. While Peak Impact captures the fat-finger error cases, the Total Impact captures cases that are not planned well or are planned with conscious changes in planning assumptions.
       For example, in situations where the business has made a conscious planning change and allocated a higher budget for a GL combination for the selected fiscal year, there is a possibility that all 12 months have low-risk signals and the total dollar impact is also a sizable number, say $80,000. This number could deprioritize a fat-finger error case for a different GL combination in March. March has just one Signal, where the numbers usually range from $6,000 to $10,000, and there is a signal because $60,000 was entered instead of $6,000. This case is captured by the Peak Impact, which helps you analyze such issues.

    • For specific selection criteria, sort both the Peak Impact and the Total Impact to get the topmost relevant signals.
  • Displays Severity of each GL combination in a dedicated column beside the Total and Peak Impacts.
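To make the Total Impact versus Peak Impact arithmetic concrete, here is a toy Python sketch. It assumes each signal's impact is its dollar variance, with Total Impact as the sum and Peak Impact as the maximum of those variances for the fiscal year, as described above; the numbers are invented and the exact formula used by Predict is not spelled out here.

# Hypothetical monthly signal variances (dollar impact) for two GL combinations
gl_a = [7000, 6500, 7200, 6800, 7100, 6900, 7050, 6950, 7000, 6850, 7150, 6800]  # 12 low-risk signals
gl_b = [54000]  # one fat-finger signal in March: $60,000 entered instead of $6,000

print(sum(gl_a), max(gl_a))  # Total Impact 83300, Peak Impact only 7200
print(sum(gl_b), max(gl_b))  # Total Impact 54000, Peak Impact 54000

# Sorting by Total Impact ranks gl_a first; sorting by Peak Impact surfaces gl_b.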

Predict: Implemented the Drill Through functionality

Now, you can right-click on any Signal in the Signal Context screen to view the Drill Through option. This allows you to view and analyze all financial, operational, capital, workforce, and transactional data for that Signal from a single place.

For more information on the drill through functionality, refer to Enhanced Drilling on Dynamic Report Output.

In Practice

  1. Go to the Signal Context page. Right-click on the Signal for which you want to view the Drill Through.

  2. Click Drill Through.

    The Drill Through page is displayed with all the data tabs for the selected GL combination.

  3. Click the reference given under the Doc Ref column, and you will be redirected to the input sheet where the data for the selected GL combination is stored.

Predict: Enhanced the Email for Dynamic Commentary

You will now be able to see the entire GL combination details in the email you receive when you are tagged in a comment on a Signal. This improvement helps you understand which Signal requires your attention. You can use the GL combination details to search for the Signal comment and take the appropriate action.

Predict: Enhanced the Signals Context Screen to display Signal Variance

Now, when you hover the cursor over a Signal in the Signal Context graph, you can see the Variance between the normal range and the Signal value. Previously, only the signal title and signal value were displayed. Now, the Signal Context graph shows the following information:

  • The Signal Title

  • The month in which the Signal is observed followed by the Signal value

  • The variance between the normal range and the Signal value

Business Value

This helps you observe the variance value and decide whether to take any action, if required, to normalize the Signal.


