November21 Release Notes

Structured Planning: Employee Actuals

With this release, you can load employee actuals into Workforce Planning at the Employee and Account level for variance analysis reporting. This significantly reduces the manual work required at the start of every month to update forecasts for Employee Actuals vs. Planned variance.

This feature will allow you to perform the following:

  • Use the Actual Scenario to store Workforce Actuals data by month or period, depending on your tenant configuration

  • Use Data Load Rule imports to load Actuals into WorkForce Planning

  • Delete loaded actuals by month to re-import in case of any mistakes

Now, the File load and Copy-Paste options are available for loading.

Note:
You need to raise a request with customer support to turn on this feature. Once the feature is turned on, you need to perform the following steps:

  1. Configure the Workforce Cube (if not configured already).

  2. Create a Partition for the selected year for which you want to load actuals to Workforce Planning.

In Practice: To use Reference Scenario in Data Load Rules

  1. Navigate to Maintenance > Data Load Rules.

  2. Click New Data Load Rule available in the side navigation.

  3. Enter a Name for the Data Load Rule.

  4. Select a Load Type from the drop-down list. File Load and Copy-Paste are the Load Types supported by the system.

  5. Select Workforce Planning as the Load Item from the drop-down list.

  6. Select Workforce - Employee Actuals as the Load Sub Item from the drop-down list.

Note:
The Workforce - Employee Actuals option has been added to the Load Sub Item drop-down list.

  7. Now, click Next. The Select Sample Input File screen is displayed.

Select Sample Input File Screen:

  8. Fill in all the fields with the required data and click Next. The Define Overall Rule Settings screen is displayed.

Define Overall Rule Settings Screen:

  9. Select a Reference Scenario from the drop-down list (optional field).

  10. Select a Load Currency Type. Local Currency (Allocated Currency) and Common Currency are the two types supported.

  11. Select an Aggregation at Time Roll up condition from the drop-down list. None, Sum, First, Last, and Avg are the types supported.

  12. Select a Time Mapping from the list; by default, the value is Across Columns.

  13. Set the Automatic Cube Refresh value to Yes to avoid manual cube processing.

Note:
In the Defaults section, all the fields are mandatory, so you must select a value for each of them.
  14. Select Include in Data File from the Employee Name drop-down if the Reference Scenario is not selected. If the Reference Scenario is selected, you will see two options in the drop-down list to choose from: Include in Data File or Select from Reference Scenario.

  15. Select Include in Data File from the Employee Position Description drop-down, irrespective of whether the Reference Scenario is selected.

  16. Select Include in Data File or Select from Default Value from the Employee Type drop-down if the Reference Scenario is not selected. When you click Select from Default Value, a drop-down menu is displayed from which you can select the required value. For a file load, you should specify the Employee Type code and name.

  17. Select Include in Data File or Select from Default Value from the Home Budget Entity drop-down if the Reference Scenario is not selected.

  18. Select Include in Data File or Select from Default Value from the Position Budget Entity drop-down if the Reference Scenario is not selected.

  19. Select Include in Data File or Select from Default Value from the Account drop-down.

Note:
All the financial fields are mandatory. You can either select a default value or specify the value in the load file.
  20. Select Include in Data File or Select from Default Value from the Fiscal Year drop-down. When you click Select from Default Value, you can select the year from the calendar. Alternatively, you can load it from the load file.

  21. Select Include in Data File or Select from Default Value from the Fiscal Month drop-down. When you click Select from Default Value, you can select the month from the calendar.

  22. Now, click Next. The Manipulate Input File screen is displayed.

Manipulate Input File Screen:

  23. You need not make any changes to this screen. Click Next to navigate to the Define Data Mappings screen.

Define Data Mappings Screen:

  24. Map the fields in the Source Column to the values available in the Maps To drop-down lists.

  25. Now, click Next. The Load Data screen is displayed.

Load Data Screen:

  26. You can either choose a new data file or click Finish.

  27. Once you click Finish, you will either receive a Data Load success message or an exception report.

  28. You can then navigate to Dynamic Reports and build your custom Variance Report.

Load Successful:

To update existing data for the specified Actual Scenario, the Employee Number, Employee Name, and Position Description must always match. If the load file contains a combination of valid and invalid rows, the load is progressive: invalid rows are skipped and valid rows are loaded, so the load can be partially successful. You can load Actual amounts for specific periods. The Actuals DLR load is synchronous.
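
For illustration only, a load file for this rule might look like the following when Employee Name, Position Description, Account, Fiscal Year, and Fiscal Month are all set to Include in Data File. The exact columns depend on your Defaults selections, and the header names and values shown here are hypothetical:

Employee Number,Employee Name,Position Description,Account,Fiscal Year,Fiscal Month,Amount
E1001,Jane Smith,Senior Analyst,60100 - Salaries,2021,Oct,8500.00
E1002,Raj Patel,Account Manager,60100 - Salaries,2021,Oct,7200.00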

Structured Planning: Clear Data

With this release, Workforce Planning has been introduced in the Clear Data screen to support the deletion of loaded Workforce Actuals. You can see the Workforce option in the Select Area drop-down list.

You can select Time Period as a parameter and delete the loaded Actuals data from the Workforce Clear Data screen. Once you select the time periods and click Delete, a job is scheduled. After the job completes, a confirmation alert message containing links to the exported files is displayed.

Note:
Users having access to the Clear Data screen can delete Workforce Actuals data for the selected period, and a copy of the deleted Workforce Actuals data will be exported automatically.

In Practice: To Clear Data for Specified Time Periods

  1. Navigate to Maintenance > Data Load Rules.

  2. Click Clear Data. The Clear Data screen is displayed.

  3. Select Workforce as the Area from the drop-down list.

  4. In the Selection Criteria, the Scenario is set to Actual by default.

  5. Select the time periods from the Select Time drop-down menu. You can select multiple time periods.

  6. Click Delete. A confirmation message appears.

  7. Click Confirm. A job submission message appears.

  8. After completion of the job, a Delete confirmation message appears.

Note:
After the data is deleted, an email notification is sent to the user, and the deleted data file is exported.

Dynamic Planning: ClickOnce Flexible Deployment

With this release, the ClickOnce Spotlight Add-ins allow you to automatically switch between the Production Spotlight Add-ins and the Sandbox Spotlight Add-ins based on the application URL. This application URL is used for logging into the application in a specific environment. Previously, to validate this functionality in the Sandbox environment through ClickOnce, you had to do the following:

  1. Uninstall the existing ClickOnce Add-in from the Production environment.
  2. Download and install the new Add-in to validate the Sandbox environment.
  3. Uninstall the new Add-in from the Sandbox environment.
  4. Reinstall the ClickOnce Add-in from the Production environment.

The installer page will be deployed on selected hosting environments, so you will have the flexibility of switching from one tenant to another in different environments.

Note:
After this release of the ClickOnce Spotlight feature, all version updates to the application will be applied automatically, with the flexibility of switching between the Sandbox and Production environments.

Business Value

With this change, you can download and install the latest version of this functionality through the automated link specific to a version. This change significantly reduces the time and effort involved in uninstalling and installing Add-ins while switching between different hosting environments.

Spotlight Addins Installer Page

The Spotlight Addins screen now provides installer links to download the latest version of the Spotlight Add-ins. You can download and install the applications in two ways:

  • Automated, which is a ClickOnce method that will provide you with automated upgrades in the future

  • Manual, which is an InstallShield MSI method where you will have to uninstall the old version to install the new version when an upgrade is released

Furthermore, the Add-in download links are specific to a particular server, and only users with access to Dynamic Planning will be able to download the Spotlight Add-in version from the installer page.

Note:
The download links for Spotlight Add-in are also updated in the Online Help.

Business Value

With the Installer page updated in the Spotlight Addins screen, you will not have to navigate to the Online Help for future upgrades.

In Practice: To use Installer Links

  1. Go to your profile and click Manage Your Account.

  2. Select Spotlight Add Ins.

You will see the ClickOnce link for Automatic Installation and the Manual Installation links for Spotlight Add-In upgrades.

Dynamic Planning: Improved Aggregation Process

Effective aggregation techniques provide more information based on related data clusters, such as a company’s revenue or performance. For example, a store may want to look at sales performance across different regions by aggregating the sales data by region.

Aggregation can be applied at any scale to summarize information and make conclusions based on data-rich findings. Data can also be aggregated by date, showing trends for years, quarters, months, etc. These aggregations could be placed in a hierarchy, where you can view the data trends for years, then see the data trends over months for each year.

Aggregation

An aggregation function is a mathematical computation over a range of values that results in a single value expressing the significance of the accumulated data it is derived from. Aggregate functions are often used to derive descriptive statistics. For example, an aggregation function groups the values of multiple rows, based on certain criteria, into a single value of more significant meaning.
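
As a minimal illustration of the idea only (not Planful-specific code), the following Python sketch applies a SUM aggregation to a few hypothetical sales rows, collapsing many leaf-level records into one value per region:

from collections import defaultdict

# Hypothetical leaf-level records: (region, month, amount).
records = [
    ("East", "Jan", 1200.0),
    ("East", "Feb", 900.0),
    ("West", "Jan", 1500.0),
    ("West", "Feb", 1100.0),
]

# SUM rollup: group the rows by region and reduce each group to a single value.
totals = defaultdict(float)
for region, _month, amount in records:
    totals[region] += amount

print(dict(totals))  # {'East': 2100.0, 'West': 2600.0}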

Improvement and Business Value

With this release, the Aggregation algorithm has been enhanced to process data quickly and accurately, saving a lot of time when working with large data models. Previously, the Aggregation algorithm took a considerable amount of time to process the data. For example, to process all data records in a model with 10 dimensions, where 6 were Key Dimensions and 4 were Value Dimensions, the Aggregation algorithm previously took close to 18 minutes. With this release, the Aggregation algorithm processes the same data records in about three and a half minutes, which significantly reduces the time and greatly increases the performance of the process.

The processing time depends on the key and value dimensions, the rollup operators, the dimensions and levels within each dimension, and the total number of records in the model. The algorithm is much more optimized in computing the rollup values in the model, so users will see that aggregation takes less time to complete.

With the new Aggregation algorithm, the overall time it takes to complete the aggregation is cut down by at least half. We have tested this aggregation algorithm with various models, and in all of these tests the aggregation times improved, in some cases dramatically.

Performance Improvement Use Cases

Example 1: A model of size 3770.90 MB has 10 dimensions, of which 6 are Key Dimensions and 4 are Value Dimensions. The following shows the data processed and the time consumed for aggregation.

  • Key Combinations Processed: 23,824,281,600

  • Value Combinations Processed: 241,280

  • Data Records Processed: 954,443

  • Time Consumed: 3 minutes 39 seconds

Previously (i.e. with the old aggregation algorithm), the time consumed for aggregating the same model was 18 minutes 22 seconds.

Example 2: A model of size 18300.4 MB has 9 dimensions, of which 5 are Key Dimensions and 4 are Value Dimensions. The following shows the data processed and the time consumed for aggregation.

  • Key Combinations Processed: 169,163,280

  • Value Combinations Processed: 275,200

  • Data Records Processed: 750,109

  • Time Consumed: 12 minutes 1 second

Previously, the time consumed for aggregating the same model was 36 minutes 55 seconds.

Example 3: A model of size 6698.2 MB has 6 dimensions, of which 3 are Key Dimensions and 3 are Value Dimensions. The following shows the data processed and the time consumed for aggregation.

  • Key Combinations Processed: 52,136

  • Value Combinations Processed: 8,096

  • Data Records Processed: 52,136

  • Time Consumed: 7 minutes 11 seconds

Previously, the time consumed for aggregating the same model was 2 hours 45 minutes 54 seconds.

Note:
The aggregation process supports 12 digits before the decimal and 6 digits after the decimal.

Enable Aggregation Performance Flag

You must contact the customer support team to enable the aggregation performance flag in your application.

Once the aggregation performance flag is enabled in your application, perform the following:

  • Log into your application.

  • Go to the Model Setup screen, and set the ‘Enable Aggregation Performance’ property to ‘Yes.’

Note:
This property is not applicable for direct connect models.

This property can be enabled even for models with Change Data tracking enabled; that is, the aggregation performance improvement can be enabled irrespective of whether the model has Change Data tracking or not.

Note:
In the upcoming releases, we plan to eliminate the above process and make this a default configuration for all applicable models.

If you have multiple models in the application and need to enable aggregation performance for all models, one option is to manually enable it from the Model setup screen. Another option is to request the support team to enable it from the backend.

Once you enable the flag on the Model Setup screen, the Aggregation function that previously used the old algorithm will run with the new algorithm. Within the same tenant, you can choose which models to enable.

This change enhances the Aggregation Performance and significantly reduces the time previously taken for aggregation.

Note:
This feature is not available for Direct Access to PCR Model and Source type Model.

In Practice: To enable the Enable Aggregation Performance Flag in SpotlightXL

  1. Open SpotlightXL and go to the Model Setup screen.

  2. Now, navigate to the Enable Aggregation Performance property, and select Yes from the drop-down options.

Platform: Added Currency Conversions In Process Flow (Preview)

We are releasing this feature for Preview only! It will be available in the Sandbox environment. We suggest that you test this feature and familiarize yourself with the functionality.

The Currency Conversions option is available as a task type when creating a Process Flow from the Cloud Scheduler > Process Flow screen.

Note:
You must upgrade to Ivy to view the Currency Conversions option in the Task Type drop-down. Reach out to Planful Support to enable Ivy in your application.

Business Value

Until now, multi-currency reporting was not available for Planful Structured Planning users, and the existing currency conversion process with consolidations was slow. This resulted in a lag between data loads and multi-currency reporting.

Adding Currency Conversions in Process Flow makes multi-currency reporting accessible to Structured Planning users. Now, the currency conversion process leverages the Ivy framework, resulting in simplified processing from Local to Common Currency, Interim Currencies, and Reporting currencies.

Best Practice

When currency conversions are added to multiple tasks, it is recommended that you configure the dependencies to process them sequentially, so that scenario locking does not fail those tasks when they are executed in parallel.

In Practice: Accessing Currency Conversions Task Type

  1. Go to Maintenance > Cloud Scheduler.

  2. Click the Process Flow tab.

  3. Click the Add Process Flow icon.

  4. Click the Tasks tab, and select the Task Type drop-down. You can view the Currency Conversions option added in the drop-down menu. Select Currency Conversions and provide the following information in the respective fields:

- Task Name - This field is mandatory. It is automatically updated when you select a Scenario, Company, and Period. By default, the Scenario + Company + Period is taken as the Task name.

- Scenario - Click the icon adjacent to this field and select a scenario. The scenarios are listed based on your Scenario security settings. This field is mandatory. Currency Conversions are allowed for unlocked scenarios only.

- Company - Click the icon adjacent to this field. You can select a roll-up or leaf member. When you select a roll-up member, the Currency Conversions process applies to all the leaf members under the roll-up member. This field is mandatory.

- Period - Select a period from the list. You can run the Currency Conversions process for the selected period. The options available are Current Month, Previous Month, Current Financial Year, and Custom Period. When you select Custom Period, you can provide the From and To dates from the respective date pickers. This field is mandatory.

Note:
When you select any non-actual scenario, the Period, From, and To fields become read-only fields.

- Dependencies - If the Currency Conversions task is dependent on any other process, select that from here. This field is optional.

- Email Recipients - You can provide the email address of the users to whom you want to send an email notification when the Currency Conversions are processed based on the Process Flow schedule. This field is optional.

While executing Currency Conversions, the Scenario is locked to avoid conflicts. Once the Currency Conversions are processed, you can find the process details in the Job Manager Detail log.

After completing the Currency Conversions, an email notification is sent to the users configured in the Process Flow against the task. An email is sent every time the task is executed for the Currency Conversions.

Platform: Excel Exports Security

In Security Administration > Tenant Security Settings, a security option, Enable Protect Sheet/Workbook in Excel Exports, has been added. By default, this setting is disabled, and when you download or export any Excel sheet from the application, you can edit it directly. When you select this security setting, all Excel files exported or downloaded from the application are in Protected Sheet (read-only) mode. As a result, you cannot edit the contents of the document directly; you have to unprotect the sheet manually to edit its contents.
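
If you leave this setting enabled but need to edit an exported file programmatically, sheet protection can typically be removed with a spreadsheet library. The following is a minimal sketch using openpyxl; it assumes the export is protected without a password, and the file names are hypothetical:

from openpyxl import load_workbook

# Load an exported workbook whose sheets are in Protected Sheet (read-only) mode.
wb = load_workbook("planful_export.xlsx")

# Turn off sheet protection on every worksheet so the cells can be edited.
for ws in wb.worksheets:
    ws.protection.sheet = False

wb.save("planful_export_unprotected.xlsx")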

Business Value

Until now, there were inconsistencies in the behavior of the Excel sheets that were downloaded or exported from the application. Most of the sheets downloaded from the application were in Protected Sheet mode by default and required additional effort to unprotect every time they were downloaded.

Introducing the Enable Protect Sheet/Workbook in Excel Exports option makes it convenient for users to enable or disable Protect Sheet mode as needed.

In Practice: Accessing the Enable Protect Sheet/Workbook in Excel Exports option

  1. Go to the Maintenance > Security Administration section.

  2. In the Tenant Security Settings section, you can view the Enable Protect Sheet/Workbook in Excel Exports option.

Note:
The Enable Protect Sheet/Workbook setting is not honored in Standard Reports and CSV exports for Clear Data.

Platform: Enhancements to Segment Retrieve APIs

Now, when you use any of the segment retrieve APIs (Segment1 through Segment8), the API response will include the IDs of all the Leaf, Roll-Up, and Parent Members. When you use any segment API, the API retrieves the data related to that segment. When a client application invokes the Segment1_Retrieve call, it passes the segment1 filter criteria in the collection of Segment1Filter objects to filter the data rows. The retrieved data will now include the IDs of all the Leaf, Roll-Up, and Parent Members.

The following is a sample response for the Segment1_Retrieve API.

<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <soap:Body>
        <Segment1_RetrieveWithLoginResponse xmlns="http://www.HostAnalytics.com/API/SOAP/StateFree/Common/2009/03/19">
            <Segment1_RetrieveWithLoginResult>
                <Segment1>
                    <MemberId>1</MemberId>
                    <Code />
                    <Name>Account Main</Name>
                    <RollupOperator>
                        <Sign>+</Sign>
                    </RollupOperator>
                    <MemberType>ConsolMember</MemberType>
                    <ActiveStatus>Active</ActiveStatus>
                    <AccountType>Balance</AccountType>
                    <AccountGroup>Asset</AccountGroup>
                    <NormalDataInput>YTD</NormalDataInput>
                    <CurrencyType>
                        <Code />
                        <Name />
                    </CurrencyType>
                    <CreditDebit>Debit</CreditDebit>
                    <Variance>Positive</Variance>
                    <ParentCode />
                    <TrialBalanceAccount>true</TrialBalanceAccount>
                    <ReportCategoryName />
                    <RollupLevels />
                    <Attributes />
                    <Sort>0</Sort>
                    <ParentMemberId>0</ParentMemberId>
                    <ParentMemberCode />
                    <ParentMemberName />
                    <ParentMemberLabel />
                </Segment1>
                <Segment1>
                    <MemberId>6981</MemberId>
                    <Code>Consolidation Hierarchy</Code>
                    <Name />
                    <RollupOperator>
                        <Sign>+</Sign>
                    </RollupOperator>
                    <MemberType>ConsolMember</MemberType>
                    <ActiveStatus>Active</ActiveStatus>
                    <AccountType>Balance</AccountType>
                    <AccountGroup>Asset</AccountGroup>
                    <NormalDataInput>YTD</NormalDataInput>
                    <CurrencyType>
                        <Code />
                        <Name />
                    </CurrencyType>
                    <CreditDebit>Debit</CreditDebit>
                    <Variance>Positive</Variance>
                    <ParentCode>Account Main</ParentCode>
                    <TrialBalanceAccount>true</TrialBalanceAccount>
                    <ReportCategoryName />
                    <RollupLevels>
                        <RollupLevel>
                            <Id>1</Id>
                            <Code />
                            <Name>Account Main</Name>
                            <Level>1</Level>
                        </RollupLevel>
                    </RollupLevels>
                    <Attributes />
                    <Sort>1</Sort>
                    <ParentMemberId>1</ParentMemberId>
                    <ParentMemberCode />
                    <ParentMemberName>Account Main</ParentMemberName>
                    <ParentMemberLabel>Account Main</ParentMemberLabel>
                </Segment1>
            </Segment1_RetrieveWithLoginResult>
        </Segment1_RetrieveWithLoginResponse>
    </soap:Body>
</soap:Envelope>

Note:
The Segment1_Retrieve API response contains the “ParentMemberId” tag, which is equivalent to the “ParentId” tag available in the responses of the other segment retrieve APIs.
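
As an illustration only, a client application could read the new parent fields out of a response like the one above with standard XML parsing. The following Python sketch is hypothetical and is not part of the Planful API; it assumes the SOAP response has been saved to a local file:

import xml.etree.ElementTree as ET

NS = {
    "soap": "http://schemas.xmlsoap.org/soap/envelope/",
    "pl": "http://www.HostAnalytics.com/API/SOAP/StateFree/Common/2009/03/19",
}

# Hypothetical path to a saved Segment1_Retrieve SOAP response.
response_xml = open("segment1_response.xml", encoding="utf-8").read()
root = ET.fromstring(response_xml)

# Each Segment1 element now carries ParentMemberId alongside the member details.
for member in root.findall(".//pl:Segment1", NS):
    member_id = member.findtext("pl:MemberId", default="", namespaces=NS)
    name = member.findtext("pl:Name", default="", namespaces=NS)
    parent_id = member.findtext("pl:ParentMemberId", default="", namespaces=NS)
    print(member_id, name or "(no name)", "parent:", parent_id)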

Platform: Trintech Adra Integration

Planful and Trintech together help customers achieve faster, more confident planning and decision-making with a complete integrated solution that accelerates the end-to-end FP&A, consolidation, and accounting close processes in a smooth-flowing environment. This global partnership empowers accounting and finance professionals by automating one of the most critical validation workflows.

Business Value

With Planful and Adra by Trintech, a best-in-class financial close management solution, users gain control and visibility over manual close and consolidation, letting them work faster and address high-value, high-priority business issues. With this integration, customers can:

  • Mitigate Reporting Risks

  • Focus on what matters the most

  • Scale organizational insights

In Practice: Accessing the Trintech Adra Option

To view the Adra option, you have to reach out to your Planful Account Manager. When this option is enabled from the backend, users with the Super Admin role can automatically view the Adra option. For other users, the Super Admin has to enable the feature from the Navigation Role page.

When you log in to the application, an Adra option appears in the side menu. When you click this option, you can navigate directly to the Adra application. As both applications are SSO integrated, you are taken to your Adra Balancer account. You will be logged out of the Planful application when you log out of the Adra application.

When you access the Adra application from the Planful application, the log-in and log-out information is recorded in the Audit logs.

Platform: Navigate to Workforce Planning Source Templates from Drill Through Reports

Now, you can navigate to the source Workforce Planning template from the Drill Through report in Dynamic Reports and Dashboards. You can click the hyperlink available in the Home Entity column of any record in the Workforce Planning tab of a Drill Through report to open the corresponding Workforce Planning source document template.

The source template opens in a new browser window. The user security for the budget entity and the navigation access permissions to the template are honored while opening the template.

Business Value

This functionality allows you to directly navigate to the workforce planning source template of the record from the Drill Through report. It significantly reduces the effort of navigating to the source template manually. You can instantly view the source data of the record and compare the values precisely.

In Practice: Accessing Workforce Planning Source template from Dynamic Reports

  1. From the left menu, click the Dynamic Reports option.

  2. From the File Cabinet dropdown menu, select the required folder. The reports related to the selected folder are displayed.

  3. Click the required report to view the report details.

  4. Double-click the cell for which you want to view the related Drill Through report.

  5. Click the Workforce Planning tab.

  6. Go to the Home Entity column and click the hyperlink related to any record.

  7. The Source template for the record opens in a new window with the same scenario and budget entity combination as available in the Drill Through for that record.

In Practice: Accessing Workforce Planning Source template from Dashboards

  1. From the left menu, select the Dashboards option.

  2. Click on any dashboard to view the related chart details.

  3. Right-click any parameter of any chart and click Drill Through to view the related Drill Through report.

  4. Click the Workforce Planning tab.

  5. Go to the Home Entity column and click the hyperlink related to any record.

  6. The Source template for the record opens in a new window with the same scenario and budget entity combination as available in the Drill Through for that record.

Platform: Enhanced the Approval Role page

Now, we have further enhanced the look and feel of the Add/Edit Approval Role pages under Maintenance > Admin > User & Role Management > Approval Role. The Approval Actions field is now a drop-down instead of radio buttons. Also, a Save button has been added at the top right for ease of access.

Platform: System Status Information Now Available within the Planful Application

Now, we have added System Status information in the User Menu > Manage Your Account > General page. This section contains links to the following information:

  • Current system status

  • Maintenance windows

  • Historical application availability

  • Service level agreements

Business Value

The System Status links redirect users to a page on the Planful website. This makes it easier for customers to locate and refer to this information.

Platform: Added a Tooltip to indicate Entity Status in Entity Mapping

Now, we have enhanced the indication of a Locked Entity and an Entity with Data by adding a tooltip and a status column. This enhancement is available on the Template Setup > Mappings > Entity Mapping and Template Setup > Setup > Template Setup > Mappings > Entity Mapping pages of Maintenance > Planning Templates. With this change, a status column along with a tooltip appears next to the mapped entities.

  • Locked Entity: If an entity is currently in use by another user, the status column and the tooltip will indicate the same. You will not be able to unmap it.

  • Entity with Data: If an entity contains data, the status column and the tooltip will display the same message, and you will be informed about it in case you try to unmap the entity.

The following is an illustration of the same.

Platform: Changed Visual Appearance of a Locked Scenario

Now, we have enhanced the visual appearance of a Locked Scenario in the Planning Control Panel page of Structured Planning > Operating Budget. With this change, any locked scenario appears in a maroon font color along with a lock icon.

The following is an illustration of the same.

Platform: Heads Up! Customized Banner Right Mask Feature Deprecation

With the December 21 release, the feature that allowed you to customize the color of the right mask in the top banner of the application will no longer be available. The default Planful right mask will be displayed irrespective of the profile chosen in Maintenance > Admin > Customize Branding > Personalize Banner.

Predict: Automated Model Training

Now, Model Training will automatically run every quarter according to the fiscal year set in your system. Model Training is automated to run on the second Sunday of every quarter. You cannot change the frequency of the automated model training. You can still manually run model training anytime you update the actuals.

If you are a new Signals user, you have to run Model Training manually for the first time. After that, the automated schedule will take effect. If, for any reason, the automated model training fails, you will get a notification, and you can run Model Training manually.
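
For a rough idea of when the automated runs land, the following sketch computes the second Sunday of each quarter. It assumes a January-start fiscal year and that "second Sunday of every quarter" means the second Sunday on or after the first day of the quarter, which may not match your tenant's fiscal calendar:

from datetime import date, timedelta

def second_sunday_of_quarter(year: int, quarter: int) -> date:
    # First day of the quarter, assuming the fiscal year starts in January.
    start = date(year, 3 * (quarter - 1) + 1, 1)
    # Monday is 0 and Sunday is 6, so this is the gap to the next Sunday.
    days_to_sunday = (6 - start.weekday()) % 7
    first_sunday = start + timedelta(days=days_to_sunday)
    return first_sunday + timedelta(days=7)

for q in range(1, 5):
    print(f"Q{q} 2022 model training: {second_sunday_of_quarter(2022, q)}")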

Business Value

This functionality provides you the flexibility and ease of keeping the Signals report up to date, without manually running model training every quarter.

Predict: Ignore Dimensions when Model Training

You can now ignore one or two dimensions from Model Training if those dimensions limit the historical actuals data to less than two years. You can ignore dimensions other than Scenario, Time, Measures, and Reporting. An ignored dimension will not be listed as a dimension filter option in the Signals Overview screen.

You can also undo the Ignore Dimension option: you can either revert completely and ignore no dimensions, or unselect only the dimension you no longer want to ignore.

You must always manually run Model Training once you have chosen to ignore dimensions.

Note:
The Dimension Security feature will not work for the dimensions that are selected to be ignored.

Business Value

This functionality allows you to generate Signals for your data even when some dimensions restrict the available timeframe to less than a year. For example, the Project dimension usually never has more than one year of data because companies run short-term projects. Therefore, the presence of dimensions such as Project prevents the system from having more than one year of data at the leaf level. Examples of other such dimensions could be Product_Year, Claim_Year, etc.

You can ignore such dimensions during Model Training so that the Predict AI engine has enough data to train the model.

In Practice

  1. Go to Maintenance > Admin > Configuration Tasks. The Configuration Tasks List page appears.

  2. Under Planful: Predict, click Predict: Signals. The Predict: Signals page appears.

  3. Choose the dimensions you want to ignore from the given drop-down.

  4. Start Model Training on the remaining dimensions of the selected scenarios.

