Data Integration Options

Data Integration Options at-a-Glance

Actual Data Template

Load actual data for the Consolidation application in a template format.

Translations Setup

Load data that needs to be translated. For example, if your organization operates with multiple companies with disparate general ledgers using different schemas of chart of accounts, data translations or mappings are required for the data to be loaded into the Planful suite.

Transaction Details

Load transaction or "daily" data, which might include translations.

Data Load Rules

Load all types of data. Planful's home-grown, easy-to-use wizard steps you through the process of getting your data into Planful.

Data Calculation Exceptions

Instruct Planful, during the data load process, to take default values for missing data for periods where no entries are posted.

Additional Data Integration Topics

Clear Data

Delete financial data posted by the Consolidation process or loaded using Data Load Rules.

Export Data

Use the Export Data and Export Data Setup pages to export data from Planful to Excel, text, or CSV (comma-separated values) file formats.

Actual Data Templates

Overview

Using Actual Data Templates, data is loaded into the Planful suite via data input templates in which data is manually entered. Administrative users can design and publish the data input templates, which are then accessed by end users who input data into the templates.

This approach is recommended when data needs to be loaded into the Planful suite from several disparate General Ledgers with significant dissimilarities in the chart of accounts, and the data translation mappings from the subsidiary general ledger data to the common reporting chart of accounts do not exist or would be hard to create.

Loading Data with Actual Data Templates

To load data using Actual Data Templates:

  1. Navigate to Maintenance > DLR > Actual Data Templates.

  2. Click Add.

  3. Enter a template name.

  4. Select template parameters. These parameters are common to all new templates, unlike the COA Segment members, which are populated based on the segments defined on the Configuration Tasks page.

  • The scenario is fixed; you can only load data to the Actual scenario.

  • Select the reporting currency, which determines whether you will load data in local or common currency.

  • Select Yes for Apply Credit Debit if the data contains credit account amounts in positive form. For Debit Accounts, amounts are loaded and displayed with the correct sign in reports. For example, if you want to see a negative number for a debit account, load a negative number for that account. For Credit Accounts, the number loaded is negated and stored that way. So, if $100 is loaded, the number is stored in Planful as -100 (100 x -1). When you run the Verify Data report, the number is displayed as -100.

    When you run a Dynamic Report, the number is again multiplied by -1, making the resulting value displayed in the Dynamic Report $100. If you select No, data for Debit Accounts is displayed as is in reports. For example, if you want to see a negative number for a debit account, load a negative number for that account. However, if you select No for Credit Accounts and load $100, $100 will be shown in Verify Data, but -$100 will be shown in Dynamic Reports.

  • For Data Type, select MTD to load an opening balance and net change for each period for all accounts (balance and flow), select YTD to load a year-to-date amount for each period for all accounts, or select Account Based for a mixed format with YTD amounts for balance accounts and net changes for flow accounts.

  • For Enable Opening Balance, select Yes to include an opening balance column in the Data Load interface.

  • Display Line Code - Select Yes to display the code column details.

  5. Select COA Segment members that you want included in the ETL (extract, transform, and load) data file/data load definition.
  6. Click Save and return to the Actual Data Template List page.
  7. Ensure the template you just added is selected and click Template Setup.
  8. To design the template with a structure (data lines), you have several options:
  • Insert line types. L lines allow data entry or calculations to be performed for the row; reference and destination accounts may be applied to a line. You can change the line type to Header (H) or Calculated (C). C lines allow calculations to be defined for a row within the template design, preventing budget users from modifying the formula. Destination accounts may be applied to a calc line.

    H lines allow you to format sections or provide additional spacing.

  • Populate a template line by mapping it to segment combinations. Select Mappings, Destination Account. Destination accounts are used for saving the budget or forecast data entered or calculated in a selected budget template against a General Ledger account. These lines display with a DA.

  • Populate a template line by referencing an account from another template. Select Mappings, Reference Account to look up data from another template. For example, payroll accounts may be looked up from the Workforce template. These lines display with an RA.

  • Select Mappings, Reference Cube to open the Rule page to look up data from the cube. For example, you can look up total revenue for all sales departments within a legal entity. These lines display with an RC.

  • Mass load data by selecting Mappings, Mass Load. The Mass Load page is enabled where you can select members to load based on dimension member parameters you select. These lines are also labeled with an L.

  9. Set the format for the template: number, currency, or percent.
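The Apply Credit Debit sign behavior described in the template parameters above can be sketched as follows. This is one reading of that behavior; the helper names are hypothetical and this is not Planful's implementation.

```python
def stored_value(amount, is_credit, apply_credit_debit):
    """Value as stored (and shown in Verify Data). With Apply Credit Debit = Yes,
    credit amounts loaded in positive form are negated on store."""
    return -amount if (apply_credit_debit and is_credit) else amount

def dynamic_report_value(stored, is_credit):
    """Dynamic Reports flip the stored sign for credit accounts."""
    return -stored if is_credit else stored

# Apply Credit Debit = Yes: load 100 to a credit account.
s = stored_value(100, True, True)    # Verify Data shows -100
d = dynamic_report_value(s, True)    # Dynamic Report shows 100
```

With the setting disabled, the same credit amount is stored as loaded but still flips sign in Dynamic Reports, matching the description above.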

Using an Actual Data Load Template to Load Data to Consolidation

In the example below, an Actual Data Load template is added. On the Template Setup page, the structure is defined.

In the Consolidation Control Panel, company 1010-Corporate is selected. The Actual Data Load process is displayed.

Note:
Actual Data Load processes are only available in Decentralized mode.

The Days to Close Books template is selected for input.

The Days to Close Books template is displayed so that Consolidation end-users can enter data.

Description of Fields on the Actual Data Template Page

Set up a data load template. Select mapping options and design the layout or structure of the template.

Adjust column widths by dragging the lines (similar to an Excel spreadsheet). Once saved, the column widths are saved to the template.

Column widths are not user specific.

Selections Available Under Mappings

Account Mapping - Maps template rows to segments. Select one of the following options:

Destination Account - open the Destination Account Mapping page to map template rows to segment combinations.

Note:
You can map multiple template rows or reference multiple accounts at once.

Reference Account - open the Reference Account Mapping page to look up data from another template. For example, payroll accounts may be looked up from the Workforce template.

Reference Cube - open the Rule page to look up data from the cube. For example, you can look up total revenue for all sales departments within a Legal entity.

Mass Load - Insert rows and configure the account mapping within the template.

Selections Available Under More

Account Reports -

  • Destination Report - open a dialog page to view each line that is identified as a destination account and print or download the report in Excel format.
  • Reference Report - open a dialog page where each line that is identified as a reference account is displayed. You may print or download the report in Excel format.
  • Show Rule - open the Rule dialog page to view associated multi-dimensional expressions.

Line Type list box - Set the line type of each line in the template by selecting the appropriate type. Select from the following options:

  • LINE (L) - Allow data entry or calculations to be performed for the row. Reference and destination accounts may be applied to a line.
  • CALC (C) - Allow calculations to be defined for a row within the template design, preventing budget users from modifying the formula. Destination accounts may be applied to a calc line.
  • HEADER (H) - Format sections or provide additional spacing within a template.

Format - Apply number, currency, or percent formatting to rows.

Decimals - Apply up to 6 decimals per row.

Fx -

  • Summary information for the cell.
  • Calculation associated with the cell.
  • Result of the equation or calculation associated with the cell.
  • An example is shown below.

Translations

Overview

Translations provide a mechanism to map data from the local chart of accounts of a reporting site (a subsidiary or partner company) to the common reporting chart of accounts used in Planful.

If your entities do not use a Common Chart of Accounts, you must provide data load files with your native chart of accounts. A Translation will be required to map your native chart of accounts used in your source data load file against the Common Chart of Accounts.

Translation models include:

  1. Translating all native Source Chart of Accounts segments to the Common Chart of Accounts.
  2. Translating a single source segment to the Common Chart of Accounts.
  3. Translating source segments to Common Chart of Accounts segments with multiple varying Translations.

With Translation model types 1 and 2, data load definitions use a single Translation file for mapping. With type 3, data load definitions use multiple Translations to make up the complete COA definition.
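Conceptually, a translation is a lookup from source segment values to common COA members. A minimal sketch, with illustrative segment names and values (not real Planful data):

```python
# Hypothetical sketch of translation lines: each entry maps a native (source)
# segment value to a member of the common reporting chart of accounts.
translation_lines = {
    ("Account", "40100"): "4010",   # subsidiary revenue account -> common account
    ("Account", "40200"): "4010",
    ("Company", "US-01"): "1010",
}

def translate(segment, source_value):
    """Return the common COA member for a source value, or None if unmapped."""
    return translation_lines.get((segment, source_value))
```

An unmapped source value returning `None` corresponds to a line that would surface on the exception report described below.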

Access the Translations Setup page by navigating to Maintenance > Data Integration > Translations Setup.

Description of Fields on the Translations Setup Page

Add - Add a translation and open the Add Translation page to map source and target segments. 

Translation Lines - Run the Translation Lines report. Once complete, you will receive a notice. Download the report and open it in Excel.

You can also run the Exception Report from here, which displays missing segments in red as shown below.

Source Segments - The segments you are translating data from.

Target Segments - The segments you are translating data to.

Data Load Rules - The associated Data Load Rule where the translation is used.

Category - The type of segment: Financial, Currency, and so on.

How to Add a Translation

  1. Navigate to Maintenance > DLR > Translations.

  2. On the Translation List page click Add.

  3. On the Translation Add page enter a code to identify the translation.

  4. Enter a name for the translation and pick the category from which you will be translating segments. In the example below, FinancialSegments is selected.

  5. For each source segment you select, enter the value of the target segment. In the example below, source column 1 in your system maps to the Company segment in Planful.

  6. Select the Enable Description checkbox if you want to add a description column for the selected Source Segments. The Description column will be visible when you drill through in a Dynamic Report with Translations data. You can use the HideDescription option available in the Translation Lines toolbar to hide the Description column.

  7. Save and navigate to Maintenance > DLR > Data Load Rules.

  8. Add a new Data Load Rule to load Segment members.

  9. On the Define Overall Rule Settings page of the Data Load Rule definition, you will select the translation defined on the translation page as shown below.

Simplified Translation Lines Option

For translations in data loads, without this option you have to set up translation line mappings for all segments, even those not being translated. The Simplified Translation Lines feature eliminates the need to define, map, and maintain translations for segments that are not translated.

In Practice

To better explain this feature, two examples are provided below: one shows how translation lines work without Simplified Translation Lines, and the other shows how they work with it.

Without Simplified Translation Lines Example

In the translation lines example below, the Account segment is the only segment being translated. The Department and Project segments both have a one-to-one mapping, meaning that no translation is required.

With Simplified Translation Lines

Using this feature, only translation lines for the Account segment need to be maintained, as shown below:

There is some minor setup required to take advantage of this feature.

  1. When creating your Data Load Rule, select the No Translation for Unmapped Target Segments checkbox (shown in the image below) from the Define Overall Rule Settings step in the Data Load Rule.
  2. Complete the Define Data Mappings step in the Data Load Rule to map the source columns in the load file to the target segments in Planful, as they will no longer be translated. There are no changes required to the load file or the existing loading process to use Simplified Translation Lines.
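The effect of Simplified Translation Lines can be sketched as follows: only the translated segment needs lines, and the other segments pass through unchanged. Segment names and values here are illustrative only.

```python
# With "No Translation for Unmapped Target Segments" enabled, only the Account
# segment carries translation lines; Department and Project load as-is.
account_lines = {"40100": "4010", "40200": "4020"}

def map_row(row):
    """Translate the Account value if a line exists; pass everything else through."""
    mapped = dict(row)
    mapped["Account"] = account_lines.get(row["Account"], row["Account"])
    return mapped  # Department and Project arrive untranslated

row = {"Account": "40100", "Department": "100", "Project": "P1"}
# map_row(row) -> {"Account": "4010", "Department": "100", "Project": "P1"}
```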

Using Wildcards to Define Complex Translations

The following table lists the wildcard characters allowed when defining complex Translations, along with a description of each.

| Wildcard Character | Usage Pattern | Action | Description |
| --- | --- | --- | --- |
| % | 123% | Allow | All source segments beginning with 123. |
| % | %123 | Allow | All source segments ending with 123. |
| % | %123% | Allow | All source segments containing 123. |
| % | 123%% | Deny | More than one '%' is not allowed in sequence. |
| ? | 123? | Allow | All 4-digit source segments beginning with 123. |
| ? | ?123 | Allow | All 4-digit source segments ending with 123. |
| ? | ?123? | Allow | All 5-digit source segments containing 123 in the middle. |
| ? | 123?? | Allow | Multiple '?' are allowed in sequence. All 5-digit source segments beginning with 123. |
| ~ | 100~200 | Allow | All source values from 100 to 200 (both numbers inclusive). |
| ~ | ~300 | Deny | '~' is not allowed as the first value in a source definition. |
| ~ | 500~ | Deny | '~' is not allowed as the last value in a source definition. |
| ~ | 200~100 | Deny | The To value must be greater than the From value. |
| ~ | 100~200~300 | Deny | Multiple '~' are not allowed in a single source definition. |
| ~ | 100~~200 | Deny | Multiple '~' are not allowed in sequence. |
| ~ | 100~?? | Deny | '~' cannot be used with any other wildcard character. |
| ~ | %~100 | Deny | '~' cannot be used with any other wildcard character. |
| % and ? | 100%? or 100?% | Deny | '%' and '?' cannot be used in sequence. |
| % and ? | %100? | Allow | All source segments containing 100 in the middle with one character after. |
| % and ? | ?10%1 | Allow | All source segments containing 10 in the middle with one character before and ending in 1. |

If an incorrect combination of wildcard characters is used in Translation Lines, exceptions are generated when you save.

The precedence of wildcard characters for applying Translation Rules is %, ~, and specific/exact character match. This means the system first applies % to all applicable Translations, followed by ~, and then the exact character match.
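The matching and precedence rules above can be sketched in Python. This is a hypothetical illustration of the documented behavior, not Planful's implementation; the sample translation lines are the ones from Example 1 below.

```python
import re

def to_regex(pattern):
    """Translate a source pattern to a regex: % -> any run, ? -> one character."""
    return "".join(".*" if c == "%" else "." if c == "?" else re.escape(c)
                   for c in pattern)

def matches(pattern, value):
    if "~" in pattern:  # inclusive numeric range, e.g. "1000~1999"
        lo, hi = pattern.split("~")
        return value.isdigit() and int(lo) <= int(value) <= int(hi)
    return re.fullmatch(to_regex(pattern), value) is not None

def apply_translation(value, lines):
    """Apply lines in precedence order (%, then ~, then ?, then exact), so a
    later, more specific match overwrites an earlier one."""
    groups = [
        [l for l in lines if "%" in l[0]],
        [l for l in lines if "~" in l[0]],
        [l for l in lines if "?" in l[0] and "%" not in l[0]],
        [l for l in lines if not any(w in l[0] for w in "%~?")],
    ]
    target = None
    for group in groups:
        for pattern, tgt in group:
            if matches(pattern, value):
                target = tgt
    return target

lines = [("10%", "1010"), ("1000~1999", "2010"), ("10?", "3010"), ("1050", "4010")]
# apply_translation("1050", lines) -> "4010" (the exact match is applied last)
# apply_translation("100", lines)  -> "3010" (10? overrides 10%)
```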

Example 1:

Translation lines are defined with wildcard characters as follows.

| Source Segment | Target Segment |
| --- | --- |
| 10% | 1010 |
| 1000~1999 | 2010 |
| 10? | 3010 |
| 1050 | 4010 |
In data loads, the source segment is automatically mapped based on the Translations. Each time the Translation Rule is applied, the previous values for Segment Mapping are updated with the latest values.

| Source Segment | % Applied | ~ Applied | ? Applied | Exact Match Applied | Final Target Segment Mapping |
| --- | --- | --- | --- | --- | --- |
| 100 | 1010 |  | 3010 |  | 3010 |
| 1000 | 1010 | 2010 |  |  | 2010 |
| 10000 | 1010 |  |  |  | 1010 |
| 105 | 1010 |  | 3010 |  | 3010 |
| 1004 | 1010 | 2010 |  |  | 2010 |
| 1050 | 1010 | 2010 |  | 4010 | 4010 |

Example 2:

Translation lines defined with wildcard character '?':

| Source Segment | Sub Account | Target Segment |
| --- | --- | --- |
| 101 | 2001 | 1010 |
| 1?? | ??? | 2010 |
In data loads, the source segment is automatically mapped based on the Translations. In this case, because line #1 is more specific than line #2, Source Segment 101 is mapped to 1010 instead of 2010.

How to Copy and Paste Translation Lines

In Excel, select the line(s) you want to paste in Planful and copy as shown below.

In Planful, add the number of lines you want to paste on the Translation Lines page.

Select the first cell in the first line as shown below.

Press CTRL+V. Notice there are now two lines with account 5555: the original line and the one just pasted.

Best Practices - Translations

  • Do not use blank spaces or special characters in the mapping.
  • When creating multiple translations for a data load, preserve the order of the source segments columns across different translations.
  • All Common Chart of Account segment members used in the Translation should exist in the application. If any segment members are missing, an exception report will be generated highlighting all missing segments.
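The existence check in the last bullet can be sketched as follows; the helper name is hypothetical, and the actual exception report is generated by Planful.

```python
def missing_members(translation_targets, existing_members):
    """Return translation target members that do not exist in the application,
    mirroring the exception report's missing-segments check (sketch only)."""
    return sorted(set(translation_targets) - set(existing_members))

# missing_members(["1010", "2010", "9999"], {"1010", "2010", "3010"}) -> ["9999"]
```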

Export Data & Export Data Setup

Overview

Use the Export Data and Export Data Setup pages to export data from Planful to Excel, text, or CSV (comma-separated values) file formats. Navigate to Maintenance > DLR > Export Data.

Export Data Setup Description of Fields

  • File Type - Select the output type: Excel (.xls, .xlsx), text (.txt), or CSV (comma-separated values).
  • First Row has Column Names - This checkbox appears when you select a .xls, .xlsx, or .csv file type. Specify whether you want the first row to contain the column names.
  • Data Format - Specify whether the periods should be present for Columns or Rows.
  • Add - Insert a new column.
  • Column - Displays the default and possible columns to include in the export data. Account, Company, Department, ICSegment, and Amount are default items. If you select Periods on rows as the data format, Fiscal Year and Fiscal Month are mandatory fields.
  • Header Name - Specify a header name for the inserted column. If you want a blank header name in the export file, enter #Blank#.
  • Value - Specify the period for Fiscal Year, Fiscal Month, and Additional Column. For Fiscal Year, specify YY/YYYY/Century. For Fiscal Month, specify MM/Month name (first 3 characters)/Month name (full name).
  • Active - Allow or do not allow the field in the data export. By default, Account, Company, ICSegment, Department, Amount, and Scenario are selected.

Once all fields above are complete, click Save & Next.
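The Fiscal Year and Fiscal Month value formats described above can be sketched with Python's strftime. The helper name and option strings are illustrative, not Planful settings (the doc's "Century" year option is omitted here).

```python
import datetime

def fiscal_columns(period, year_fmt="YYYY", month_fmt="Mon"):
    """Render the Fiscal Year / Fiscal Month export columns.
    year_fmt: 'YY' or 'YYYY'; month_fmt: 'MM', 'Mon' (first 3 characters),
    or 'Month' (full name)."""
    year = {"YY": period.strftime("%y"), "YYYY": period.strftime("%Y")}[year_fmt]
    month = {"MM": period.strftime("%m"),
             "Mon": period.strftime("%b"),
             "Month": period.strftime("%B")}[month_fmt]
    return year, month

# fiscal_columns(datetime.date(2024, 2, 1), "YY", "Month") -> ("24", "February")
```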

Export Data Description of Fields

  • Scenario - Select the scenario from which the data will be exported.
  • Start Month - Select the month and year from which you want to export data for the selected scenario. This field is mandatory when the Actual scenario is selected.
  • End Month - Select the month and year up to which you want to export data for the selected scenario. This field is mandatory when the Actual scenario is selected.
    Note:
    For other scenarios, if you don't select a date range, the complete scenario data is exported.
  • Measure - Select a measure, which determines how the data is reflected in the export:
    1. MTD - Export data based on a monthly measure.
    2. YTD - Export data based on a year-to-date measure.
    3. Account Based - Export data based on a specific account.
  • Company Hierarchy - This hierarchy displays the Company Main and all the members associated with it. Select the segment from which you want to export data. You can select the members in the hierarchy at any level and only the selected member details are exported. If you select a roll-up member, all the corresponding leaf members' information in the hierarchy under that roll-up member is also exported.
  • Reporting Hierarchy - Displays the Reporting hierarchy and all the members associated with it. Select the segment from which you want to export data. You can select the members in the hierarchy at any level and only the selected member details are exported. If you select a roll-up member, all the corresponding leaf members' information in the hierarchy under that roll-up member is also exported.
  • Common Currency - Applies common currency (CC) to the data. If you select this option, all the dimensions with a CC are taken into consideration.
  • Local Currency - Applies local currency (LC) to the data. If you select this option, all the dimensions with a LC are taken into consideration.
  • Select Report Ids - Enable the Reporting Hierarchy to select a particular Reporting Dimension instead of considering all the consolidated dimensions.

Transactions

Overview

You might use this functionality to load "daily" transactional data in addition to Data Load Rules. The source columns are associated with the translations defined. Access the Transaction Details page by navigating to Maintenance > DLR > Transaction Details. The maximum number of transaction column attributes is 60.

Description of Fields on the Transaction Detail Page

Amount Columns

Specify how many amount columns are required in the Data Load Rules on the Define Overall Rule Settings page shown below.

  • A Single Value selection creates one amount column.
  • A Debit Amount and Credit Amount selection creates two columns, one for debits and one for credits.
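When a source provides separate debit and credit amount columns, they are commonly collapsed into a single signed amount. The sketch below shows the usual accounting convention (debits positive, credits negative); it is not a Planful rule.

```python
def signed_amount(debit, credit):
    """Collapse separate debit and credit amount columns into one signed value:
    debits positive, credits negative. Missing values are treated as zero."""
    return (debit or 0.0) - (credit or 0.0)

# signed_amount(250.0, 0.0) -> 250.0 ; signed_amount(0.0, 100.0) -> -100.0
```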

Source Data

Select the location of the source data. If the source data resides in source segments, the Translation Data Load field becomes available for selection, as shown below, so you can select the translation to apply to the transaction.

Translation Data Load

Organizations that operate multiple companies with disparate general ledgers using different schemas of charts of accounts require data translations or mappings for the data to be loaded into the Planful suite.

Select the translation you would like to apply to the transaction, which copies the structure from the Data Load Rules that you select here (if you use Translations as the source).

Remember, a translation is a way of translating or rendering data. For example, one might translate data from a local currency to a common currency.

Attribute Type

The Common COA (Chart of Accounts) segment defaults are displayed based on the setup on the Define Financial Segments page accessed by navigating to Maintenance > Admin > Configuration Tasks.

Attribute Name

Enter the name of the attribute based on the segments defined for the Common COA and the attributes set up for those segments. For example, some Common COA names are: Company, Account, and Project.

Attribute data type

varchar - Select this option to associate a data type with a variable character field. Variable character fields have a user-defined length and can hold letters and numbers.

URL - Select this option to load a reference URL. You can click the URL hyperlink when you view the Transactions Drill-Through reports.

Attribute length

Enter the maximum number of characters for the field.

Loading Transaction Data using a Data Load Rule

You can load Transaction Data using a Data Load Rule. The following documentation provides information on options when you select the Data item and the Transaction Data sub item.

Clear all data prior to new data load

If you enable Clear all data prior to new data load when transactional data is loaded via Data Load Rules (see image below), the system clears all data for Actuals for the distinct time periods available in the new data load file.

If you disable Clear all data prior to new data load, the system clears pre-existing transactional data that matches all of the following conditions:

  • The scenario is Actuals.
  • The distinct time periods available in the new data load file.
  • The segment combinations available in the new data load file.
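Those clearing conditions can be sketched as a filter over existing rows. The data structures and field names below are hypothetical, purely to illustrate the rules.

```python
def rows_to_keep(existing, new_rows):
    """Sketch of the clearing rules when 'Clear all data prior to new data load'
    is disabled: drop pre-existing Actuals rows whose (time period, segment
    combination) also appears in the new load file."""
    incoming = {(r["period"], r["segments"]) for r in new_rows}
    return [r for r in existing
            if not (r["scenario"] == "Actuals"
                    and (r["period"], r["segments"]) in incoming)]

existing = [
    {"scenario": "Actuals", "period": "2024-01", "segments": ("1010", "4010"), "amount": 100},
    {"scenario": "Actuals", "period": "2024-02", "segments": ("1010", "4010"), "amount": 200},
    {"scenario": "Budget",  "period": "2024-01", "segments": ("1010", "4010"), "amount": 150},
]
new_rows = [{"period": "2024-01", "segments": ("1010", "4010"), "amount": 110}]
# Only the Actuals row for 2024-01 / (1010, 4010) is cleared; the other
# Actuals period and the Budget row are untouched.
```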

Asian Characters in Transaction Data Loads

You can load Transaction data with Chinese, Japanese, Korean, and Malay characters using a Translation or GL Data Data Load Rule. Once loaded, you can view the data in the Dynamic Report Transaction Report by drilling through to transactional data.

You must contact Planful Support to enable this feature. Once enabled by Support, complete the steps below.

If you try to run Data Load Rules to load Asian characters prior to carrying out the steps below, the Asian characters will display as question marks.

Complete the following steps:

  1. Contact the Planful Support team to enable this feature.

  2. Access the Transaction Details Setup page by navigating to Maintenance > DLR > Transaction Details.

  3. If the user-defined attributes are not present in the existing Transaction Details Setup, you must add them using the ‘+’ icon before saving the Transaction Details Setup. Note that, per the existing behavior, if any changes are made to the Transaction Details Setup screen before saving, previously loaded Transactions are lost.

  4. Asian characters can be loaded to user-defined attribute columns only; you cannot load data with Asian characters to segment members. To add a new user-defined attribute, click the add ‘+’ icon and select Text for Attribute Data Type, or select URL to enter a web hyperlink.

  5. For Attribute Length, enter up to a maximum of 4000 characters.

  6. Click Save.

  7. Access the Data Load Rule and load Transaction Data for the GL Data/Translations Data that has been loaded. If using the File Load type, ensure that when the Excel load file is saved, the Save As format option is either CSV UTF-8 (Comma delimited) (*.csv) or Unicode Text (*.txt).

  8. Create a Dynamic Report to view the Transaction Report. The image below displays a Dynamic Report (with drill through enabled for Transactions). To view transactional data with Asian characters, double-click and the Transaction Report displays (also shown below).

  9. If you are an existing customer and require a migration of the existing Transaction data, then the transaction data needs to be reloaded.
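The file-format requirement in step 7 can be illustrated with standard Python: Excel's "CSV UTF-8 (Comma delimited)" format is UTF-8 with a byte-order mark, which Python's csv module handles via the `utf-8-sig` encoding. The file name and sample values are illustrative.

```python
import csv

# Rows with Japanese and Korean text in a user-defined attribute column.
rows = [["Account", "Memo"], ["4010", "売上高"], ["5010", "급여"]]

# "CSV UTF-8 (Comma delimited)" in Excel is UTF-8 with a BOM ("utf-8-sig").
with open("transactions.csv", "w", newline="", encoding="utf-8-sig") as f:
    csv.writer(f).writerows(rows)

# Reading it back shows the characters survive the round trip intact.
with open("transactions.csv", newline="", encoding="utf-8-sig") as f:
    assert list(csv.reader(f)) == rows
```

Saving with a plain ANSI encoding instead is what produces the question marks mentioned above.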

In the image (below) you can see how the attribute columns map between the Transaction Data and Transaction Report.

Make sure you enable Drill Through to Transactions for the Dynamic Report. To do so, click Format and select Transactions as shown below.

Translation Data Load

To load transaction data with Asian characters using translations, set up a Translations data load rule. For information on how to do so, see the Transaction Details: Loading Transaction Data with Translations topic in the Admin Guide to Loading Data to Planful.

Data Load Rules

Overview

The Planful suite includes built-in data integration functionality, which allows you to import your data.

A Data Load Rule (DLR) tells the Planful system how to handle data values in the data source during a data load. You can create data load rules to load files, copy and paste data, and use web services to load segment hierarchies, attributes, attribute hierarchies, entity hierarchies, users, Workforce data, currency exchange rates, and other data. There are several steps to create a data load rule. These steps vary based on source and target data options.

The first page you will see when you access the Data Load Rules page is the Data Load Rule List. Click New Data Load Rule and the Data Integration wizard will step you through the remainder of the process.

Types of Data Loads

The following methods for data import are available:

Spreadsheet Based Data Load

Using this approach, your source data is first exported to Microsoft Excel and then copied into the Planful browser-based spreadsheet interface using a copy/paste operation.

This approach is user-friendly and the most commonly used; however, when the data volume is large (i.e., above 60,000 records per data load), alternative methods are recommended.

Note:
When using Excel files, ensure all numerical data is formatted as Number. Data formatted as Accounting or other formats is not supported.

File Based Data Load

Using this approach, your source data is first exported into a text, Excel, or CSV file format. Then, through load mapping settings, the load file setup is configured in the application. Lastly, the source file is uploaded to the Planful application database.

If you set up a File-based data load rule and later want to load data via Web Services, instead of re-creating the entire DLR (Data Load Rule), you can simply edit it and change the Load Type to Web Services. This flexibility saves time and ensures consistency in loading data when the same DLR is used. However, it is very important, and a best practice, to complete all the steps in the DLR to effectively change the DLR type. Otherwise, the DLR will be rendered unusable.

You can switch between the types of DLRs:

  1. File Load

  2. Web Services

  3. Copy – Paste

  4. NetSuite Connect

For example, to change the load type from File to Web Services, click the DLR. Change the Load Type to Web Services and complete the remaining steps of Select Sample Input File, Define Overall Rule Settings, Manipulate Input File, Define Data Mappings, and Load Data for the changes to take effect.

Data Load Rules (DLRs) are automatically sorted alphanumerically (Screenshot A). Sorting alphanumerically allows you to more quickly identify a Data Load Rule without scrolling through the lists.

If you sort on any column on the Data Load Rule List page, navigate away, and then return to the same page, the sorting is retained. For example, in Screenshot B (below), the Created By column is selected and the DLRs are sorted on that basis; they will continue to display in this sort order unless another column is sorted.

Screenshot A

Screenshot B

Note:
The orange caret displayed in any of the columns denotes that the user has sorted the DLR list on that column.

Template Based Data Load

Using this approach, data is loaded into the Planful suite via data input templates in which data is manually entered. Administrative users can design and publish the data input templates, which are then accessed by end users who input data into the templates.

This approach is recommended when data needs to be loaded into the Planful suite from several disparate General Ledgers with significant dissimilarities in the chart of accounts, and the data translation mappings from the subsidiary general ledger data to the common reporting chart of accounts do not exist or would be hard to create.

Web/Cloud Services

See the Admin Guide to Loading Data to Planful.

High Level Steps to Create a Copy-Paste DLR to Load Data

  1. Access the Data Load Rule page by navigating to Maintenance > DLR > Data Load Rules.

  2. Click the New Data Load Rule tab.

  3. Select Copy - Paste as the Load Type on the New Data Load Rule page.

  4. Copy and paste data from your source file to the spreadsheet on the Select Sample Input File page.

  5. Select parameters, defaults, and segment information to further define the way you want to load data using the Define Overall Rule Settings page.

  6. Transform/manipulate the data you are loading to best suit your needs on the Manipulate Input File page.

  7. Map source data columns to targets using the Define Data Mappings page.

  8. Perform the load using the Load Data page.

Note:
You can paste up to a maximum of 1000 rows and 70 columns.

High Level Steps to Create a File DLR to Load Data

  1. Select File as the Load Type on the New Data Load Rule page.

  2. Load the source file using the Select Sample Input File page.

  3. Select parameters, defaults, and segment information to further define the way you want to load data using the Define Overall Rule Settings page.

  4. Transform/manipulate the data you are loading to best suit your needs on the Manipulate Input File page.

  5. Map source data columns to targets using the Define Data Mappings page.

  6. Perform the load using the Load Data page.

Note:
To load a Text file, save the file as a txt (tab delimited) type and then load.
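The note above can be demonstrated with a short script. Assuming Python is available, the standard csv module writes a tab-delimited text file directly (the column names here are illustrative, not a required layout):

```python
import csv

# Illustrative rows; a real export would carry your own chart-of-accounts columns.
rows = [
    ["Company", "Account", "Amount"],
    ["C100", "4000", "1250.00"],
    ["C100", "5000", "-310.50"],
]

# Save as tab-delimited text -- the "txt (tab delimited)" type the note describes.
with open("actuals.txt", "w", newline="") as f:
    csv.writer(f, delimiter="\t").writerows(rows)
```

Opening the resulting file in a text editor shows one tab-separated record per line.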

High Level Steps to Create a Web Services DLR to Load Data

  1. Access the Data Load Rule page by navigating to Maintenance > DLR > Data Load Rules.

  2. Click the New Data Load Rule tab.

  3. Select Web Services as the Load Type on the New Data Load Rule page.

  4. Use an external Data Integration tool (such as Boomi) to load data to the Planful system. The remaining steps vary based on the tool used.
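The integration tool performs the actual delivery, so the Planful-side steps end here. As a rough, hypothetical sketch of what such a tool prepares, rows from a source system are shaped into a structured payload (the field names and payload shape below are assumptions for illustration, not Planful's API):

```python
import json

# Hypothetical rows extracted from a source system.
rows = [
    {"company": "C100", "account": "4000", "period": "2018-10", "amount": 1250.00},
    {"company": "C100", "account": "5000", "period": "2018-10", "amount": -310.50},
]

# An integration tool would serialize rows like this and deliver them to
# whatever endpoint it is configured with (this shape is an assumption).
payload = json.dumps({"records": rows})
```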

Steps to Convert a DLR Load Type

When you set up a Data Load Rule (DLR), you select how your source data (such as actuals) is loaded to Planful. For example, you might load it in a file format, copy and paste the data, or use Boomi Connectors (Web Services). Suppose you generally use the File load type to load data to Planful from your source system, but have decided to switch to connectors via Web Services. Instead of re-creating the entire DLR, you can simply edit it and change the Load Type. This flexibility saves time and ensures consistency when the same DLR is reused.

You can switch between the following DLR load types:

  • File Load

  • Web Services

  • Copy – Paste

  • NetSuite Connect

Best Practices:

Complete all the steps in the DLR to effectively change the DLR type. Otherwise, the DLR will be rendered unusable.

In the image below, a File type load has been used to load actuals.

In Practice: Changing the Load Type From File to Web Services

  1. Navigate to Maintenance > DLR > Data Load Rules.

  2. Select a DLR and double-click it.

  3. Change the Load Type to Web Services and complete the remaining steps of Select Sample Input File, Define Overall Rule Settings, Manipulate Input File, Define Data Mappings, and Load Data for the changes to take effect.

To change the DLR back to the File Load type:

  1. Double-click the DLR from the Data Load Rule List page.

  2. On the New Data Load Rule page, change the Load Type to File Load and complete the remaining steps of the DLR.

Steps to Convert a DLR with a Load Type of File, Copy Paste, or Web Services to NetSuite Connect

When you set up a Data Load Rule (DLR), you select how your source data (such as actuals) is loaded to Planful. For example, you might load it in a file format, copy and paste the data, or use Boomi Connectors (Web Services). Suppose you generally use the File load type to load data to Planful from your source system, but have decided to use NetSuite Connect instead. Instead of re-creating the entire DLR, you can simply edit it and change the Load Type. This flexibility saves time and ensures consistency when the same DLR is reused.

Best Practices:

Complete all the steps in the DLR to effectively change the DLR type. Otherwise the DLR will be rendered unusable.

In the image below, a File type load has been used to load actuals.

In Practice

  1. Access the Data Load Rules pages by navigating to Maintenance > DLR > Data Load Rules.

  2. Double-click a DLR on the Data Load Rule List page. The Data Load Rule opens.

  3. Change the load type as needed. In the example below, a File Load type is changed to NetSuite Connect.

  4. Complete the remaining steps from Select Sample Input File to the Load Data step to use the DLR.

To use NetSuite Connect there are required setup steps. See the NetSuite Connect documentation to learn all about NetSuite Connect and how to perform the setup.

Steps to Change a NetSuite Connect DLR Load Type to File Load, Web Services, or Copy Paste

  1. Access the Data Load Rules pages by navigating to Maintenance > DLR > Data Load Rules.

  2. Check the Usage Report of the NetSuite Connect DLR to find out if the DLR is being used in any Process Flow.

  3. If the DLR is being used in a Process Flow, as shown in the above screenshot, edit the Process Flow and remove the Task with that DLR. If there is only one Task in the Process Flow, then you will have to delete the Process Flow to continue changing the DLR type.

  4. Return to the Data Load Rule page and double-click a DLR on the Data Load Rule List page. The Data Load Rule opens.

  5. Change the load type as needed.

  6. Complete the remaining steps from Select Sample Input File to the Load Data step to use the DLR.

High Level Steps to Load Alternate Hierarchy Data Using an XLSX Source File

Note:
Before beginning a data load, the alternate leaf member must already exist in the main hierarchy.

To load data into an alternate hierarchy:

  1. Navigate to Maintenance > DLR > Data Load Rules.

  2. Click New Data Load Rule.

  3. Select the File Load load type, the Segment Hierarchies load item, and the Segment load sub item.

  4. Click Select Sample Input File.

  5. Browse for the Sample Data File under the File Information section of the screen (cf. first figure, below) then click Next or Define Overall Rule Settings.

  6. Select Alternate Hierarchy from the Hierarchy Type section. If you set up an elimination company in Hierarchy Management and selected Yes for Elimination as shown below, there will be an option to tell the application to obtain a value for the elimination company from the file that contains the data you are loading to Planful.

  7. Select Yes for Include in Data File for the Elim Attribute to obtain the value from the data you are loading to Planful. This Parameter is available for Segment Hierarchy and Alternate Hierarchy data loads.

Note:
Elimination companies in the main company hierarchy do not require a new code for the alternate hierarchies.

  8. Define the remaining parameters for the data load (e.g., input file manipulation, data mappings).

  9. Click Load Data and then click Finish. The DLR for the alternate hierarchy is submitted for processing. A notification will be sent once processing is complete and the data is in the alternate hierarchy.

Note:
The following special characters are restricted from use in the Leaf Code of a segment leaf member when creating segment hierarchy data or loading it to Planful: ‘ " ~ ` ! @ # $ ^ * ( ) + = [ ] { } \ | ; : < > , ? All other special characters documented in the Getting Started guide are supported.
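A source file can be pre-checked against this restriction before loading. The sketch below simply tests each Leaf Code against the characters the note lists (the helper is illustrative, not part of Planful):

```python
# Characters the note above lists as restricted in a segment Leaf Code.
# "\u2018" is the left single quotation mark shown first in the list.
RESTRICTED = set("\u2018\"~`!@#$^*()+=[]{}\\|;:<>,?")

def leaf_code_ok(code: str) -> bool:
    """Return True if the Leaf Code contains none of the restricted characters."""
    return not any(ch in RESTRICTED for ch in code)
```

For example, leaf_code_ok("ACCT-100") passes, while leaf_code_ok("ACCT[100]") fails because of the square brackets.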

How to Create a Data Load Rule to Process Workforce Planning Allocations

Complete the steps below to create a data load rule to process allocations in Workforce Planning.

Prerequisites

  • Choose Allocation Method by Department when setting up employee allocations. For information on how to do so, see the Understanding Employee Allocations topic in the Workforce Planning Admin Guide.

  • Employees should have previously been populated in a Workforce Planning scenario.

  • Create a file that defines the allocation by all segments in the Common Chart of Accounts.

  • Define the file to include all fiscal years based on the budget input years defined in Configuration Tasks.

  • Define the allocation percentages for each month ensuring that the percentages total 100% for each month within a given fiscal year.

  • Once loaded, the allocation can be viewed in the home department of the employee.
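Before loading, the 100%-per-month prerequisite above can be validated. This sketch assumes a simple in-memory shape for the allocation rows; the field names are illustrative, not the DLR file layout:

```python
# Illustrative allocation rows: percent of an employee's cost per department,
# per month, for one fiscal year.
allocations = [
    {"employee": "13400", "department": "Sales", "months": [60.0] * 12},
    {"employee": "13400", "department": "Marketing", "months": [40.0] * 12},
]

def allocations_balanced(rows, periods=12):
    """Check that each month's percentages total 100 across departments."""
    for m in range(periods):
        total = sum(r["months"][m] for r in rows)
        if abs(total - 100.0) > 1e-6:
            return False
    return True
```

Here the 60/40 split sums to 100 in every month, so the check passes; a file where any month falls short would be caught before the load.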

In Practice

Steps to process allocations via a data load rule:

  1. Define a data load rule with the following parameters:
    • Load Type: Use either File Load or Copy-Paste

    • Load Item: Choose Workforce Planning

    • Load Sub-Item: Choose Workforce – Employee Allocations

  2. Click Next.

  3. Select Sample Input File and complete the following fields:

    • Row Containing Header – Select 1

    • Number of Rows to Skip – Select 1

    • Number of Source Fields – Select 22; this is the number of columns in the file format for this Data Load Rule

    • Number Format/Thousand Separator – Select Comma(,)

    • Number Format/Decimal Separator – Select Dot(.)

    • Date Format – Select MM/DD/YYYY

    • Data Load – Copy and paste the formatted fields to be used in the DLR from an Excel spreadsheet

  4. Click Next.

  5. Define Overall Rule Settings: Define the scenario and time mapping to be used for this data load rule.

    • Scenario – Select the Scenario the data will be loaded into.

    • Allocation Basis – This is predefined based on the Define Workforce Planning Budgeting Criteria task; in this case, Allocation by Department

    • Time Mapping – Select Across Columns which defines a column for each month that allocations are processed against.

  6. Click Next.

  7. Manipulate Input File – Confirm the columns align with the Source file.

  8. Click Next.

  9. Define Data Mappings – Select the correct field mappings to the columns identified in the source data file.

  10. Click Next.

  11. Confirm the scenario data is being loaded and click Finish. Use Verify Data to confirm data loads.

You will receive confirmation that data was loaded successfully, or any errors during the data load will be identified. If errors are found, correct them in the source file and attempt the load again.

High Level Steps to Incrementally Load Workforce Data

To incrementally load Employee Workforce Data, complete the following steps:

  1. Navigate to Maintenance > DLR > Data Load Rules.

  2. Click the New Data Load Rule tab.

  3. Select a Load Type for your new data load rule.

  4. Select Workforce Data for the Load Item.

  5. Select Employee Data as the Load Sub Item.

  6. Continue with the load process. New employees will be appended and existing employees will be updated.

Note:
There are no limits on the number of employee records that can be loaded at a time.

Loading SSO Users

You cannot load SSO Authentication users directly using Data Load Rules. If there are only a few users, you can manually add them from the user interface. If there are too many users to add manually, load the users with email and password as Native Authentication. Then, submit a Support case and the Operations group will change the user Authentication mode from Native to SSO.

You can also load SSO users with Planful APIs. After creating users with Data Load Rules or the APIs, the authentication type must be updated by the Planful Operations team.

Creating New Data Load Rules - Explanation of All Fields with Examples

Data Load Rule List

This application page contains all data load rules that have been created and modified.

The Status indicates whether the latest data load was completed successfully. Red indicates a failure, green means the load was successful, and a gray hyphen indicates that no data loads have taken place.

Sort the list by clicking a column header to display the sort icon. You can sort in ascending or descending order.

Delete

Planful checks for data loaded to the Actual scenario. If data is loaded for a locked period, data deletion is not allowed. Unlock the period and then click this button to perform the deletion.

Clear Data

Delete all data loaded to date for different scenarios using the selected Data Load Rule.

Note:
This feature is applicable for GL Data, Translations Data, Workforce - Employee Data.

For the Workforce Employee Data, you will receive an alert message if you click the Clear Data button and data exists. The alert message indicates that employee data exists and if you proceed, all employee details and associated data will be deleted. Clear Data deletes data from whichever scenario is selected and saved in the Define Overall Rule Settings step of the data load. If employees are mapped to any other scenario, the data is retained in that scenario.

Copy

Copy a selected data load rule. When the Copy Dataload page is opened, select a new name for the rule you are copying data to, or allow the system to assign a default name.

Once the copy is complete, the system will inform you, and the DLR List is automatically updated to include the new rule.

The following settings will be copied:

  • General settings (name, description, etc.)

  • Sample input file format information

  • Overall rule settings

  • Metadata (data mappings and manipulations)

Note:
If you are loading actuals across fiscal years and rolling balances forward, the data must be loaded using the same Data Load Rule as the prior year.

Deleting a Data Load Rule

Once a data load rule is created, you can delete the rule without affecting the loaded data (with the exception of the GL Data type load). For example, if you created a rule to load Segment Hierarchies and then delete the rule after you load the segments and hierarchies, the rule will be removed from the application but the segments and hierarchies you loaded will remain. In some cases, it's a good idea to keep data load rules that you create for reuse purposes. For example, you can reuse a rule to mass-load new members for a dimension or update properties.


New Data Load Rule Page Description of Fields

Access the New Data Load Rule page by navigating to Maintenance > DLR > Data Load Rules. Click the New Data Load Rule tab.

Load Type

Load Types represent the way in which you plan to load data to Planful. Select from the following options:

  • File Load – Load a file that exists outside of the Planful system. For .csv file types, each column must contain data that is separated by a comma. For .txt files, make sure each column contains data that is separated by a tab.

  • Copy-Paste – Copy and paste cells directly from a spreadsheet to the Planful system.

  • Web Services - Load data through an external Data Integration tool (like Boomi). If you create a Web Services DLR and want to change the DLR to a File Load type, you can do so. Make sure you complete all steps on Select Sample Input File, Define Overall Rule Settings, Manipulate Input File, Define Data Mappings, and Load Data for the changes to take effect.

  • Cloud File Upload - Load data through Box. The difference between the File Load and Cloud File Upload is that data for the Cloud File Upload is loaded to Planful from a cloud-based service provider (i.e., Box) whereas data for a File Load is loaded from a non-cloud based source (i.e., hard drive).

  • Google Drive File Upload - Load data via Google Drive.

  • NetSuite Connect - Load data through a NetSuite integration.

Once a Cloud File Data Load Rule is configured, you can include the rule in a Cloud Scheduler task, job, and process flow. When the process flow is executed, the Planful system automatically connects to your Box account, looks for relevant data, and extracts that data to Planful. The data load is then categorized as a Success or Failure in Box. The status (successes and failures) of Data Load Rule cloud file uploads is recorded in the Detail Log accessed on the Job Manager page.

Best Practices - Specific to Box

  • Set the Automatic Cube Refresh at the Define Overall Rule Settings step in the Data Load Rule to No.

  • Set up Cloud Scheduler so that the Scenario process is dependent upon the Data Load Rule.

  • Process the scenario after loading the Box file and have a separate task for each in Cloud Scheduler (shown in the image below).

Load Item

Select from the following load items:

  • Segment Hierarchies - Select specific segments to load in the Load Sub Item list box.

    Note:
    The follow special characters are restricted from use in the Leaf Code of a segment leaf member when creating segment hierarchy data or loading it to Planful: ‘ " ~ ` ! @ # $ ^ * ( ) + = [ ] { } \ | ; : < > , ? All other special characters documented in the Getting Started guide are supported.
  • Data - Select detailed data (i.e. Sales Data, GL Data, etc.) in the Load Sub Item list box and load all associated information.

  • Attributes - Load the values associated with the attribute. Once loaded, you can view the values on the Attribute Members List page. Important! When loading attributes, the leaf name AND code must be included.

  • Attribute Hierarchies - Load values for an attribute in an attribute hierarchy.

  • Entity Hierarchies - Load entity hierarchies, such as the Budget Entity Hierarchy.

  • User Information - Load multiple users at once with all associated information.

    Note:
    You can also load and view users using the User Management page.
  • Workforce Data - Load Workforce Data, like employee data for example.

  • Currency Exchange Rates - Create a data load rule for currency exchange rates.

Note:
Currency conversion with currency exceptions is calculated when loading data via Data Load Rules. However, there are limitations. When the Currency Type is set to not translated (NT) or the exchange rate is not up-to-date for a currency type that is an exception, the conversion process does not produce the correct calculations. The workaround is to make sure you enter an exchange rate. If NT is required, create a currency type with NT (on the Currency Type page) AND enter zero (0) as the exchange rate (on the Exchange Rates page).

Load Sub Item

The available options are populated based on the selected Load Item.

Segment Hierarchies

Available segments are listed. The options available are populated based on system-delivered/defined segments and segments you create.

Data

When loading GL Data, Sales Data, Sales Price, Transaction Data, and Translations Data, it is best to perform a sequential load. For example, if you load data for October 2018 after loading data for November 2018, you will need to reload both October and November to calculate the MTD/QTD/YTD information correctly for reporting purposes.

When data is loaded, Planful automatically calculates either MTD or YTD values based on the data load rule setting.

YTD setting: MTD=YTD current period - YTD prior period

MTD setting: YTD=MTD current period + YTD prior period

Account Based: If account type is Flow then YTD=MTD current period + YTD prior period. If account type is Balance then MTD=YTD current period - YTD prior period.
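The three calculation settings above can be sketched in a few lines (the function names are illustrative, not Planful's):

```python
def mtd_from_ytd(ytd_current, ytd_prior):
    # YTD setting: MTD = YTD current period - YTD prior period
    return ytd_current - ytd_prior

def ytd_from_mtd(mtd_current, ytd_prior):
    # MTD setting: YTD = MTD current period + YTD prior period
    return mtd_current + ytd_prior

def account_based(account_type, current, prior_ytd):
    # Account Based: Flow accounts accumulate into YTD; Balance accounts
    # derive MTD as the period-over-period difference.
    if account_type == "Flow":
        return ytd_from_mtd(current, prior_ytd)
    return mtd_from_ytd(current, prior_ytd)
```

For example, with a prior YTD of 200 and a current YTD of 300, the derived MTD is 100; loading that 100 as MTD reproduces the 300 YTD.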

With this release, Transaction Data Version 2 has been introduced to the data load process.

Transaction Data V2

Transaction Data Version 2 provides faster, more efficient processing of the Data Load Rule. You can select the Transaction Data V2 option from the Load Sub Item drop-down list on the New Data Load Rule page. The Transaction Data Load Rule process itself is unchanged; only the processing speed has been increased.

Note:
You cannot roll back to Transactions Data version 1 once you start using version 2.

Attributes

Available attributes are listed. The options available are populated based on system-delivered/defined attributes and attributes you create using the Attributes page. 

Important!
When loading attributes, the leaf name AND code must be included.

Attribute Hierarchies

Available attribute hierarchies are listed. This list is populated based on the attribute hierarchies defined on the Attribute Hierarchy page.

Entity Hierarchies

Budget Entity Hierarchies contain entities based on attributes or dimensions. Select this option to load data to defined budget entities and associate the children of the hierarchy (lowest level members) created using the Budget Hierarchy Entity Setup page.

Workforce Data

You can load employee data directly to a target scenario other than the Default scenario. Select the target scenario on the Overall Rule Settings page.

  • Workforce - Employee Data - Load employee position and attribute information or perform a complete load of employee information including compensations.

  • Workforce - Employee Compensation Item - Create a data load rule to update Salary (full-time or hourly wage type compensation items) and Direct Data Input compensation items in the Default Scenario. This functionality allows you to incrementally update Workforce data versus reloading all Workforce data.

Note:
Compensation Item Loads require that you load all compensation items for an employee.

The following table reflects the columns that are required for this type of data load and some example row data:

| Employee # | Plan Pos./Pos. Desc. | Compensation Code | Rate | Hour | Default Value | Description |
| --- | --- | --- | --- | --- | --- | --- |
| 13400 | Default Pos | FT | 0 | 0 | 70000 | Full-time / Annualized employee |
| 13400 | Default Pos | Commissions | 0 | 0 | 100 | Direct Data Input compensation item |
| 13400 | Default Pos | PT | 40 | 2000 | 0 | Hourly employee |

  • Workforce - Employee Allocations - Load employee allocations (allocation percentages) for all periods of each fiscal year (across cost centers). For example, if the start year is 2018 and the scenario covers a two-year time period, the load should contain allocations for 2018 and 2019. Also, each allocation needs to total 100%. For example, if an employee's expenses relate to two different departments, the allocation must total 100% across both departments.

Currency Exchange Rates

  • Load Exchange Rates - Load exchange rate details (conversion calculations, currency types, etc.). Exchange rates are generally calculated based on the exchange rates entered on the Exchange Rate page. Exceptions are not taken into consideration when exchange rates are loaded.

  • Exchange Rates with Translations - This option allows you to load exchange rate data and translate the data from a local currency to a common currency. 

    Translations provide a mechanism for organizations to map the data from the local chart of accounts of the reporting site (subsidiary or a partner company) with the common reporting chart of accounts used in the Planful suite.

Description of Fields on the Select Sample Input File Page

Select file requirements or the structure of the data you are loading. If you are performing a Copy Paste or Web Services load, this page allows you to specify parameters for incoming data. Different fields are available based on the Load Type selected on the New Data Load Rule page.

Note:
To load a Text file, save the file as a txt (tab delimited) type and then load.

Let's say you loaded data using Data Load Rules with a File load type. Then, you want to add extra data columns, which you want reflected in the load. To make this modification and load data, change the number of columns and browse to the sample input file to change the structure. Lastly, map the new columns on the Define Data Mappings page before you perform the load.

If you are having any difficulties loading the sample data file, make sure the source file does not contain formulas linked to other files on your computer. Files with formula references are not supported.

Note:
The format of your data spreadsheet (i.e. Excel) must match the Select Sample Input File specifications.

File Requirements - These fields are available if you selected a File Load Type

  • File Type – Select the type of file you want to load: Excel (xls), text (txt), or csv (comma delimited value).

  • Column Delimiter - Available if you select to load from a txt or csv file. A column delimiter separates each column with a tab, semicolon, or comma. xls files already contain column separators, so this option is not available for them.

  • Row Identifier - Specify the character that separates rows in the Data Load file. The row delimiter is important because the system needs to know where a row begins and ends. The identifier marks the beginning/ending of rows in your txt and csv files.

  • Text Qualifier - Specify the character you want the system to use to enclose values in the file. This function is important because it separates the field data from the delimiter. You may select to separate the two using single quotes or double quotes. If you choose none, there will be no separator to distinguish between the delimiter and field data.
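As a quick illustration of why the qualifier matters, Python's csv module accepts both a delimiter and a quote character; the qualifier keeps a comma inside field data from being read as a column break:

```python
import csv
import io

# A record where the text qualifier (double quotes) protects a comma
# inside field data from being treated as a column delimiter.
line = '"Acme, Inc.",4000,1250.00\n'

row = next(csv.reader(io.StringIO(line), delimiter=",", quotechar='"'))
# row -> ["Acme, Inc.", "4000", "1250.00"]
```

Without the qualifier, "Acme, Inc." would be split into two columns and the load would misalign every field after it.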

Note:
The File Requirements / Information fields are available for file loads only.

Header

  • Row Containing Header- Enter the number of the row that contains the header information. A header contains the heading names for a column.

  • Number of Rows to Skip- Enter the number of rows in your spreadsheet to skip. For example, if your spreadsheet has 20 rows, but the first 3 rows contain data that is not important you can enter 3 in this field. The system will skip the first 3 rows during the data load process.

  • Number of Source Fields - Enter a number based on the number of columns you plan to select to copy from your spreadsheet, txt file, or csv file. For example, if your source file contains 10 columns, enter 10 here. For Copy Paste loads, the graph automatically reflects the number you enter.

Number Format

Different countries use different ways to represent numbers. For example, 6 euros and 28 cents needs to be loaded as 6,28 whereas 6 dollars and 28 cents should load as 6.28. The Thousand and Decimal Separators allow you to specify the format in which you want to load your numerical data.

  • Thousand Separator - Select a period or comma to separate thousands. In general, non-USD based companies will set the thousand separator to period and the decimal separator to comma.

  • Decimal Separator - Select a period or comma to separate decimals. In general, USD based companies will set the thousand separator to comma and the decimal separator to period. This is the default setting.

Note:
Thousand Separator and Decimal Separator must be different. For example, you cannot select a comma for the Thousand Separator and a comma for the Decimal Separator.
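A sketch of how these two separators drive parsing (the function is illustrative, not part of Planful):

```python
def parse_amount(text, thousand_sep=",", decimal_sep="."):
    """Normalize a formatted number string to a float.
    The two separators must differ, as the Note above requires."""
    if thousand_sep == decimal_sep:
        raise ValueError("Thousand and Decimal Separators must be different")
    # Drop grouping characters, then normalize the decimal mark.
    return float(text.replace(thousand_sep, "").replace(decimal_sep, "."))
```

With the European convention, parse_amount("1.234,56", ".", ",") yields 1234.56; with the default USD convention, parse_amount("6,280.28") yields 6280.28.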

Date Format

Select the format for the date to appear in the resulting load. Different countries use different ways of representing a day within a calendar year. For example, in Hungary the date is represented as Year/Month/Day, while in the U.S. many organizations represent a date as Month/Day/Year. Essentially, there are several ways in which you may choose to represent a date to best coincide with your organization's current practices.
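For illustration, the same calendar day parsed under three of these conventions:

```python
from datetime import datetime

# The same day (31 October 2018) expressed in three date formats.
formats = {
    "MM/DD/YYYY": ("%m/%d/%Y", "10/31/2018"),
    "DD/MM/YYYY": ("%d/%m/%Y", "31/10/2018"),
    "YYYY/MM/DD": ("%Y/%m/%d", "2018/10/31"),
}

parsed = {name: datetime.strptime(value, fmt)
          for name, (fmt, value) in formats.items()}
# Every entry parses to the same date: 31 October 2018.
```

Picking the wrong format in the DLR would silently swap day and month for ambiguous values such as 10/03, which is why the setting must match the source file.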

File Information - Available for File Load Types

Sample Data File - Click Browse to search for the sample data file.

Copy - Paste Data

If you select to Copy Paste data, a grid will appear. Copy data from your file and paste it into the provided grid. Paste data into the grid by selecting the whole grid (click on the cell in the upper left hand corner).

Note:
You can copy and paste up to 10,000 rows (which is the default). The system automatically deletes empty rows, which will not be displayed on the Manipulate Input File page.

Define Overall Rule Settings

Overview

Select parameters, defaults, and segment information to further define the way you want to load data. Different fields are available based on the Load Type. Documentation is grouped based on load types.

Description of Fields for a Segment Hierarchies Load

Hierarchy Type - Select Main Hierarchy unless you are loading data to an Alternate Hierarchy.

Member

Allow Move - Select to allow parent changes for existing members.

Allow Property Change - Select to allow Property Changes (i.e. name, currency, or other system properties) for existing members.

Note:
If you do not select the Allow Property Change check box, two parent entries will be created with the same code and two names (one of which will be empty). The default behavior for nodes is: Parent nodes - uniqueness is maintained at the Code-Name level; Leaf nodes - uniqueness is maintained at the Code level.
Note:
The Allow Property Change check box is enabled for the NetSuite Connect DLR. This functionality is applicable only for the Parent Child Segment Hierarchies load.

Build

  • Level - Build a hierarchical level.
    • Select Number of Levels - Available if you select the Level option. Select the number of levels you want your hierarchy to consist of.
  • Parent Child - Build a parent-child based hierarchy.

Parent Child Versus Level Based Hierarchies

A Parent Child hierarchy is one where you have a child field and a parent field. For example, let's say you have a table with employee department details and associated subordinates.

| Child | Parent |
| --- | --- |
| Matt | ? |
| Brian | Matt |
| Jim | Matt |
| Katy | Brian |

Matt is the ultimate boss of the department. Matt has two subordinates: Brian and Jim. Brian's subordinate is Katy.

A Level Based hierarchy has the following structure.

| Level 1 | Level 2 | Level 3 |
| --- | --- | --- |
| Matt | Brian | Katy |

Ragged hierarchies (hierarchies with varying levels of tree depth) are supported while loading Level based segment hierarchies using a Data Load Rule.
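The relationship between the two forms can be sketched in code: walking each member of the parent-child table up to the root produces its level-based row, and members at different depths (a ragged hierarchy) simply produce shorter rows:

```python
# The parent-child rows from the example above (None marks the root).
parent_of = {"Matt": None, "Brian": "Matt", "Jim": "Matt", "Katy": "Brian"}

def level_path(member):
    """Walk up to the root to produce the level-based row for a member."""
    path = []
    while member is not None:
        path.append(member)
        member = parent_of[member]
    return list(reversed(path))  # root first: Level 1, Level 2, ...
```

level_path("Katy") gives ["Matt", "Brian", "Katy"], matching the level-based row above, while level_path("Jim") gives the shorter, ragged row ["Matt", "Jim"].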

Note:
The intermediate Parent Code and Parent Name in a hierarchy cannot be empty.
Note:
Level Based Hierarchy load does not support loading a roll-up level without any leaf-level members.

Parameter

The parameters listed are those available to be included in the data file. If an elimination company was set up in Hierarchy Management, you may have a default Elim Attribute parameter (as shown below). Select Yes for Include in Data File for the Elim Attribute to obtain the value from the data you are loading to Planful. This Parameter is available for Segment Hierarchy and Alternate Hierarchy data loads.

Include in Clear Data

Select segments to clear or retain data for. For example, if you load data for Company A and then want to use the same data load rule to load data for Company B, the data loaded for Company A will not be deleted if you do not choose to clear the data.

Include in Data File

Select Yes if a dimension is not defined within the data file and there is a common member for all records.

Select No if you do not want the listed parameters in the data file.

Select

Available if you select No for a parameter. Search for other dimensions to associate with the data file.

Description of Fields for a Data Load

Parameters

  • Reporting - Select Common Currency (CC) if you are loading legal entity data in a common currency. Otherwise, select Local Currency (LC).

  • Credit/Debit – Select No if account balances are in their natural trial balance format. If not, select Yes. The calculation for Debit and Credit accounts works as follows when "Credit/Debit" is set to 'Yes' while loading data:

    • For Debit Accounts - Amounts loaded for a debit account will be displayed "as is" in reports. For example, if you want to see a negative number for a debit account, load a negative number for that account while uploading data into the system.

    • For Credit Accounts - By default, the number being loaded is negated and stored in the system. So, if an amount of 100 is loaded, the number is stored internally as -100. The Verify Data Report will show -100, as (100 x -1) is applied while storing the data.

  • Data Type – Select YTD format for balance sheet and income statement data loads. If you select MTD, there must be an opening balance for data. Select Account Based to format the balance sheet based on an account.

  • Calculate Trial Balance – Select Yes to calculate the trial balance. Credits and Debits must be balanced in order for the data to load. Remember that a Trial Balance is a list of all general ledger accounts contained in an organization's ledger. When the trial balance option is available, the data load file is rejected if the validation check for the trial balance fails.

  • Currency Conversion – Select Yes to translate the data from local currency to common currency. Select No to indicate that a translation does not need to be processed.

  • Automatic Cube Refresh - Select to automatically have the system refresh the reporting cube. The cube must be processed after the load is complete to view translated data in reports.

  • Scenario – Select the default scenario to load data. If you select a Budget Scenario, Forecast Scenario or a Preload Scenario for a GL or Translation and select Month as the Frequency, the Across Columns by Scenario Time Mapping value is available.

  • Frequency – Select Monthly for all Financials. For Scorecards you may select Day, Quarter, or Year.

  • Time Mapping – Select whether time periods are defined across the columns or whether a single period column is defined with the values stored in the rows. If you select Across Columns by Scenario, the Segments (subject to default settings), Fiscal Months, and Opening Balances are available in data mappings; Fiscal Year is not. In the DLR default settings, the Fiscal Year and Fiscal Month fields are disabled, and Fiscal Year is disabled in Data Mappings. Across Columns by Scenario is also available for Sales Price and Sales Data if a budget scenario is selected while loading Sales Price/Sales Data.

  • Process Employees - Select this checkbox to process employees using the DLR. If this checkbox is selected and data is loaded successfully using the DLR, a request to process all employees is submitted. The default value is No; for existing DLRs, this checkbox is not selected.

    Note:
    The request to process all employees is submitted only if Enable Delta Processing is enabled in the Define Workforce Planning Budgeting Criteria page of Configuration Tasks.

The check box is available for the following DLRs:

  • Workforce - Employee Data

  • Workforce - Employee Compensation Item

  • Workforce - Employee Allocations
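As an illustrative sketch (not Planful's actual implementation), the debit/credit sign convention and the Calculate Trial Balance validation described above can be expressed as:

```python
# Hypothetical sketch of two behaviors described above: the sign
# convention applied when amounts are stored, and the trial balance
# validation. Function names and row shapes are illustrative only.

def stored_value(amount: float, account_type: str) -> float:
    # Debit amounts are stored "as is"; credit amounts are negated
    # (multiplied by -1) at load time.
    return amount * -1 if account_type == "credit" else amount

def trial_balance_ok(rows) -> bool:
    # With Calculate Trial Balance set to Yes, the file is rejected
    # unless total debits equal total credits.
    debits = sum(r["amount"] for r in rows if r["type"] == "debit")
    credits = sum(r["amount"] for r in rows if r["type"] == "credit")
    return debits == credits

print(stored_value(100, "credit"))  # -100
print(trial_balance_ok([
    {"type": "debit", "amount": 161},
    {"type": "credit", "amount": 161},
]))  # True
```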

Default Value for Missing Periods

Define data calculations for missing periods at DLR level with GL Data and Translations sub type.

  • Enable new Data Calculations - Select this checkbox to enable data calculations for missing periods (selected by default).

  • Overwrite Global Settings - Select this checkbox to overwrite values for flow accounts and balance accounts.

  • Flow Accounts - Select YTD or MTD flow account to overwrite.

  • Balance Accounts - Select YTD or MTD balance accounts to overwrite.

Email Recipients

By default, the email recipients added to the Cloud Services Configuration Task are carried forward to the Cloud Services Data Load Rule, but you can edit the recipients for each Cloud Services Data Load Rule you create.

Clear Data Configuration - Include in Clear Data for GL and Translations Data

For the Clear Data Combination, Scenarios are never taken from the file; they are defined by the DLR settings on the same page, so the file has no effect on which Scenario is updated or cleared. If you do not select any checkboxes, the system clears all data for the Scenario defined in the DLR and the Time periods found in the load file. If you select a box, for example Company, the system clears all data for those Time periods and that Scenario, but only for the companies found in the load file. The more boxes you select, the more granular and precise the clear becomes, based on the file being loaded.

Clear Data Combination functionality can be used in two ways:

  1. Uncheck all dimensions except the dimension that drives your currency (a legal entity such as Company) to clear all data loaded to date for the legal entities and periods in the file and replace it with the new file.

  2. Check all dimensions - to update only the GL combinations present in the file and not clear any other combinations previously loaded using this rule.

If you are unsure how to use this functionality, start with all of the boxes unchecked. This ensures that every subsequent file load clears all data previously loaded to that scenario for the fiscal period(s) in the current load file.

The most common situation where you might need to check a box is when you are loading multiple files using the same DLR to the same fiscal period. Here are two examples:

  • If you are loading multiple Companies using separate files, you would check the Company box. This would ensure that ONLY the data for the company in the next load file would be cleared. Other companies loaded prior to this file would remain unchanged. 

  • A similar situation can occur with the Account dimension. For example, you might have one file that contains trial balance information and another that contains data for statistical accounts. The files might have the same layout and use the same DLR. In this case, you would check the Account dimension only, so that loading the statistical amounts after the trial balance amounts does NOT clear the trial balance amounts.

Clear data combination allows you to clear existing data prior to loading new data based on selected segment combinations.

  • Segment - Select segments to update the records in the database for the named data load rule. The Legal Entity dimension that affects the currency in the application will always be grayed out.

  • Include in Clear Data - Select this checkbox to clear all records and reload what is in the file for all periods included in the data load rule.

When All Segments are checked, the system will overwrite only the associated ProfitCenter_CostCenter and Account combinations in the file.

When Account is not checked, the system will overwrite data loaded previously using the Data Load Rule (DLR) at the ProfitCenter_CostCenter level. For example, if you are loading data to the file for the ProfitCenter_CostCenter member “0001” only, all account data previously loaded using the DLR for “0001” will be deleted for the period and reloaded based on the file.
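The clearing scope described above can be sketched as follows. This is an illustrative model, not Planful's actual implementation: existing rows are cleared when they match the load file on every checked dimension (the Scenario and Time periods always come from the DLR settings and the file).

```python
# Sketch of Clear Data Combination scope: the load clears existing rows
# that match the file on every CHECKED dimension. Dimension names and
# row shapes are hypothetical, chosen to mirror the examples below.

def rows_to_clear(existing, load_file, checked_dims):
    """Return the existing rows that the load would clear."""
    # Collect the checked-dimension value combinations present in the file.
    keys = {tuple(row[d] for d in checked_dims) for row in load_file}
    return [row for row in existing
            if tuple(row[d] for d in checked_dims) in keys]

existing = [
    {"Company": "2010", "Account": "1610", "Jan-22": 161},
    {"Company": "2010", "Account": "3320", "Jan-22": 332},
    {"Company": "2100", "Account": "3320", "Jan-22": 444},
]
load = [{"Company": "2100", "Account": "3320", "Jan-22": 555}]

# No boxes checked: every existing row for the period is cleared.
print(len(rows_to_clear(existing, load, [])))                      # 3
# Company checked: only rows for Company 2100 are cleared.
print(len(rows_to_clear(existing, load, ["Company"])))             # 1
# Company & Account checked: only the exact 2100/3320 row is cleared.
print(len(rows_to_clear(existing, load, ["Company", "Account"])))  # 1
```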

1. Loading Data with all Segments Checked in Clear Data Combination

Below is a sample of the original data that had been previously loaded with a data load rule for 2022:

Original Data

| Company | Department | Account | ProductLine | CostCenter | BusinessCenter | Geography | ICSegment | Fiscal Year | Jan-22 | Feb-22 | Mar-22 | Apr-22 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2010 | Default | 1610 | Default | Default | Default | Default | Default | 2022 | 161 | 161 | 161 | 161 |
| 2010 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 332 | 332 | 332 | 332 |
| 2100 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 444 | 444 | 444 | 444 |

New data is then loaded using a data load rule, with all segments checked in the Clear Data Combination option:

Data Loaded

| Company | Department | Account | ProductLine | CostCenter | BusinessCenter | Geography | ICSegment | Fiscal Year | Jan-22 | Feb-22 | Mar-22 | Apr-22 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2100 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 555 | 555 | 555 | 555 |
| 2120 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 212 | 212 | 212 | 212 |
| 2140 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 214 | 214 | 214 | 214 |

The resulting data contains the previously loaded data along with the newly loaded data, with any updates:

Resulting Data, with all Segments Checked in Clear Data Combination

| Account | Company | Department | CostCenter | BusinessCenter | ProductLine | Geography | ICSegment | Jan-22 | Feb-22 | Mar-22 | Apr-22 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1610 | 2010 | Default | Default | Default | Default | Default | Default | 161 | 161 | 161 | 161 |
| 3320 | 2010 | Default | Default | Default | Default | Default | Default | 332 | 332 | 332 | 332 |
| 3320 | 2100 | Default | Default | Default | Default | Default | Default | 444 | 444 | 444 | 444 |
| 3320 | 2120 | Default | Default | Default | Default | Default | Default | 212 | 212 | 212 | 212 |
| 3320 | 2140 | Default | Default | Default | Default | Default | Default | 214 | 214 | 214 | 214 |

Conversely, when all Segments in Clear Data Combination are unchecked, all data previously loaded by this data load rule for the time periods included in the load is cleared, and the new data is loaded.

2. Loading Data with all Segments Unchecked in Clear Data Combination

Below is a sample of original data that had been previously loaded with a data load rule for 2022:

Original Data

| Company | Department | Account | ProductLine | CostCenter | BusinessCenter | Geography | ICSegment | Fiscal Year | Jan-22 | Feb-22 | Mar-22 | Apr-22 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2010 | Default | 1610 | Default | Default | Default | Default | Default | 2022 | 161 | 161 | 161 | 161 |
| 2010 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 332 | 332 | 332 | 332 |
| 2100 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 444 | 444 | 444 | 444 |

New data is then loaded using a data load rule, having all segments unchecked in the Clear Data Combination option:

Data Loaded

| Company | Department | Account | ProductLine | CostCenter | BusinessCenter | Geography | ICSegment | Fiscal Year | Jan-22 | Feb-22 | Mar-22 | Apr-22 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2100 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 555 | 555 | 555 | 555 |
| 2120 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 212 | 212 | 212 | 212 |
| 2140 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 214 | 214 | 214 | 214 |

The resulting data clears data that was previously loaded by this data rule and for the time period included in the data load rule, and loads the new data:

Resulting Data, with all Segments Unchecked in Clear Data Combination

| Account | Company | Department | CostCenter | BusinessCenter | ProductLine | Geography | ICSegment | Jan-22 | Feb-22 | Mar-22 | Apr-22 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 3320 | 2100 | Default | Default | Default | Default | Default | Default | 555 | 555 | 555 | 555 |
| 3320 | 2120 | Default | Default | Default | Default | Default | Default | 212 | 212 | 212 | 212 |
| 3320 | 2140 | Default | Default | Default | Default | Default | Default | 214 | 214 | 214 | 214 |
Note:
The Default Clear Data Combination is to have all segments checked. In this case, no existing data is cleared by the load rule. New data is loaded from the data load rule and the existing data remains. For data that is loaded to existing intersections, the new data updates the existing data.

3. Loading Data with Only One Segment Checked in Clear Data Combination

Below is a sample of original data that had been previously loaded with a data load rule for 2022:

Original Data

| Company | Department | Account | ProductLine | CostCenter | BusinessCenter | Geography | ICSegment | Fiscal Year | Jan-22 | Feb-22 | Mar-22 | Apr-22 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2010 | Default | 1610 | Default | Default | Default | Default | Default | 2022 | 161 | 161 | 161 | 161 |
| 2010 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 332 | 332 | 332 | 332 |
| 2100 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 444 | 444 | 444 | 444 |

New data is then loaded using a data load rule, having only one segment (Company) checked in the Clear Data Combination option:

Data Loaded 

| Company | Department | Account | ProductLine | CostCenter | BusinessCenter | Geography | ICSegment | Fiscal Year | Jan-22 | Feb-22 | Mar-22 | Apr-22 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2100 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 555 | 555 | 555 | 555 |
| 2120 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 212 | 212 | 212 | 212 |
| 2140 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 214 | 214 | 214 | 214 |

The resulting data contains the previously loaded data along with the newly loaded data, with any updates:

Resulting Data, with only one Segment (Company) Checked in Clear Data Combination

| Account | Company | Department | CostCenter | BusinessCenter | ProductLine | Geography | ICSegment | Jan-22 | Feb-22 | Mar-22 | Apr-22 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1610 | 2010 | Default | Default | Default | Default | Default | Default | 161 | 161 | 161 | 161 |
| 3320 | 2010 | Default | Default | Default | Default | Default | Default | 332 | 332 | 332 | 332 |
| 3320 | 2100 | Default | Default | Default | Default | Default | Default | 555 | 555 | 555 | 555 |
| 3320 | 2120 | Default | Default | Default | Default | Default | Default | 212 | 212 | 212 | 212 |
| 3320 | 2140 | Default | Default | Default | Default | Default | Default | 214 | 214 | 214 | 214 |

4. Loading Data with more than One Segment Checked in Clear Data Combination

Below is a sample of original data that had been previously loaded with a data load rule for 2022:

Original Data

| Company | Department | Account | ProductLine | CostCenter | BusinessCenter | Geography | ICSegment | Fiscal Year | Jan-22 | Feb-22 | Mar-22 | Apr-22 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2010 | Default | 1610 | Default | Default | Default | Default | Default | 2022 | 161 | 161 | 161 | 161 |
| 2010 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 332 | 332 | 332 | 332 |
| 2100 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 444 | 444 | 444 | 444 |

New data is then loaded using a data load rule, having two segments (Company & Accounts) checked in the Clear Data Combination option:

Data Loaded 

| Company | Department | Account | ProductLine | CostCenter | BusinessCenter | Geography | ICSegment | Fiscal Year | Jan-22 | Feb-22 | Mar-22 | Apr-22 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2100 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 555 | 555 | 555 | 555 |
| 2120 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 212 | 212 | 212 | 212 |
| 2140 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 214 | 214 | 214 | 214 |

The resulting data contains the previously loaded data along with the newly loaded data, with any updates:

Resulting Data, with two Segments (Company & Account) Checked in Clear Data Combination

| Account | Company | Department | CostCenter | BusinessCenter | ProductLine | Geography | ICSegment | Jan-22 | Feb-22 | Mar-22 | Apr-22 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1610 | 2010 | Default | Default | Default | Default | Default | Default | 161 | 161 | 161 | 161 |
| 3320 | 2010 | Default | Default | Default | Default | Default | Default | 332 | 332 | 332 | 332 |
| 3320 | 2100 | Default | Default | Default | Default | Default | Default | 555 | 555 | 555 | 555 |
| 3320 | 2120 | Default | Default | Default | Default | Default | Default | 212 | 212 | 212 | 212 |
| 3320 | 2140 | Default | Default | Default | Default | Default | Default | 214 | 214 | 214 | 214 |

Clear Data Combination- Clear all data prior to new data load for Transaction Data

If the 'Clear all data prior to new data load' box is unchecked, the Transaction Data Load will clear transactions only for the exact GL combination strings contained in the load file. 

The system does not allow duplicate transaction data for a GL combination. If you want to update a specific transaction, uncheck the box and upload all transaction data for that specific GL combination.

Parameter

The parameters listed are those available to be included in the data file.

Transaction Details

Transaction details can be set up on the Transaction Details Setup page.

Attribute type - The Common COA (Chart of Accounts) segment defaults are displayed based on the setup on the Define Financial Segments page.

Attribute name - The name of the attribute based on the segments defined for the Common COA and the attributes set up for those segments. For example, some Common COA names are: Company, Account, and Project.

Attribute data type -

  • varchar - Associates a data type with a variable character field. Variable character fields have a user-defined length and can hold letters and numbers.

  • URL - Load a reference URL. You can click the URL hyperlink when you view the Transactions Drill-Through reports.

Attribute length - The maximum number of characters for the field.

Include in Data File

  • Select Yes if a dimension is not defined within the data file and there is a common member for all records.

  • Select No if you do not want the listed parameters in the data file.

Default/Select

Available if you select No for a parameter. Search for other dimensions to associate with the data file.

1. Loading Data with the Clear all data prior to new data load Checkbox Checked

Below is a sample of original data that had been previously loaded with a data load rule:

Original Data

AccountCompanyDepartmentCostCenterBusinessCenterProductLineGeographyICSegmentDate/TimeDebit AmountProduct CodeNet gmsURL
1943-Balance-YTD_V12091LC5351009210091PROD 011100-9011103315-01-202215001P007126150http://www.yahoo.com
1943-Balance-YTD_V12091LC5351009210091PROD 011100-9011103316-01-202216001P007127160www.gmail.com
1943-Balance-YTD_V12091LC5351009210091PROD 011100-9011103317-01-202216301P007128170http://www.google.com
1943-Balance-YTD_V12091LC5351009210091PROD 011100-9011103318-01-202216401P007129180http://www.yahoo4.com
1943-Balance-YTD_V12091LC531009210091PROD 011100-9011103319-01-202216402P007130190http://www.yahoo4.com

New data is then loaded using a data load rule with the Clear all data prior to new data load checkbox checked:

Data Loaded 

AccountCompanyDepartmentCostCenterBusinessCenterProductLineGeographyICSegmentDate/TimeDebit AmountProduct CodeNet gmsURL
4018-Flow-MTD_V12091LC5351009210091PROD 011100-9011103315-01-202225001P007126150http://www.yahoo.com
4018-Flow-MTD_V12091LC5351009210091PROD 011100-9011103316-01-202225002P007127160www.gmail.com
4018-Flow-MTD_V12091LC5351009210091PROD 011100-9011103317-01-202225003P007128170http://www.google.com
4018-Flow-MTD_V12091LC5351009210091PROD 011100-9011103318-01-202225004P007129180http://www.yahoo4.com
4018-Flow-MTD_V12091LC531009210091PROD 011100-9011103319-01-202225005P007130190http://www.yahoo4.com

The resulting data shows that the previously loaded lines (Account 1943-Balance-YTD_V1) are cleared by this rule and replaced with the five newly loaded lines (Account 4018-Flow-MTD_V1):

Resulting Data

AccountCompanyDepartmentCostCenterBusinessCenterProductLineGeographyICSegmentDate/TimeDebit AmountProduct CodeNet gmsURL
4018-Flow-MTD_V12091LC5351009210091PROD 011100-9011103315-01-202225001P007126150http://www.yahoo.com
4018-Flow-MTD_V12091LC5351009210091PROD 011100-9011103316-01-202225002P007127160www.gmail.com
4018-Flow-MTD_V12091LC5351009210091PROD 011100-9011103317-01-202225003P007128170http://www.google.com
4018-Flow-MTD_V12091LC5351009210091PROD 011100-9011103318-01-202225004P007129180http://www.yahoo4.com
4018-Flow-MTD_V12091LC531009210091PROD 011100-9011103319-01-202225005P007130190http://www.yahoo4.com

2. Loading Data with the Clear all data prior to new data load Checkbox Unchecked

Below is a sample of original data that had been previously loaded with a data load rule:

Original Data

AccountCompanyDepartmentCostCenterBusinessCenterProductLineGeographyICSegmentDate/TimeDebit AmountProduct CodeNet gmsURL
1943-Balance-YTD_V12091LC5351009210091PROD 011100-9011103315-01-202215001P007126150http://www.yahoo.com
1943-Balance-YTD_V12091LC5351009210091PROD 011100-9011103316-01-202216001P007127160www.gmail.com
1943-Balance-YTD_V12091LC5351009210091PROD 011100-9011103317-01-202216301P007128170http://www.google.com
1943-Balance-YTD_V12091LC5351009210091PROD 011100-9011103318-01-202216401P007129180http://www.yahoo4.com

New data is then loaded using a data load rule with the Clear all data prior to new data load checkbox unchecked.

Data Loaded 

AccountCompanyDepartmentCostCenterBusinessCenterProductLineGeographyICSegmentDate/TimeDebit AmountProduct CodeNet gmsURL
4018-Flow-MTD_V12091LC5351009210091PROD 011100-9011103315-01-202225001P007126150http://www.yahoo.com
4018-Flow-MTD_V12091LC5351009210091PROD 011100-9011103316-01-202225002P007127160www.gmail.com
4018-Flow-MTD_V12091LC5351009210091PROD 011100-9011103317-01-202225003P007128170http://www.google.com

The resulting data adds the three newly loaded lines for Account 4018-Flow-MTD_V1 to the existing data:

AccountCompanyDepartmentCostCenterBusinessCenterProductLineGeographyICSegmentDate/TimeDebit AmountProduct CodeNet gmsURL
1943-Balance-YTD_V12091LC5351009210091PROD 011100-9011103315-01-202215001P007126150http://www.yahoo.com
1943-Balance-YTD_V12091LC5351009210091PROD 011100-9011103316-01-202216001P007127160www.gmail.com
1943-Balance-YTD_V12091LC5351009210091PROD 011100-9011103317-01-202216301P007128170http://www.google.com
1943-Balance-YTD_V12091LC5351009210091PROD 011100-9011103318-01-202216401P007129180http://www.yahoo4.com
4018-Flow-MTD_V12091LC5351009210091PROD 011100-9011103315-01-202225001P007126150http://www.yahoo.com
4018-Flow-MTD_V12091LC5351009210091PROD 011100-9011103316-01-202225002P007127160www.gmail.com
4018-Flow-MTD_V12091LC5351009210091PROD 011100-9011103317-01-202225003P007128170http://www.google.com

Description of Fields for an Attributes and Attribute Hierarchies Load

Important! When loading attributes, the leaf name AND code must be included.

Member

Allow Move - Select this checkbox while loading segment hierarchies to allow Parent Changes for existing members.

Allow Property Change - Select this checkbox while loading segment hierarchies to allow Property Changes (i.e. name, currency, or other system properties) for existing members.

Note:
If you do not select the Allow Property Change checkbox, two parent entries will be created with the same code and two names (one of which will be empty). The default behavior for nodes is: Parent nodes - uniqueness is maintained at the Code-Name level. Leaf nodes - uniqueness is maintained at the Code level.

Build

Level - Select to build a hierarchical level.

Select Number of Levels - This option is available if you select the Level option. Select the number of levels you want your hierarchy to consist of.

Parent Child - Select to build a parent child based hierarchy.

Description of Fields for Entity Hierarchies

Member

Allow Move - Select this checkbox while loading segment hierarchies to allow Parent Changes for existing members.

Allow Property Change - Select this checkbox while loading segment hierarchies to allow Property Changes (i.e. name, currency, or other system properties) for existing members.

Note:
If you do not select the Allow Property Change checkbox, two parent entries will be created with the same code and two names (one of which will be empty). The default behavior for nodes is: Parent nodes - uniqueness is maintained at the Code-Name level. Leaf nodes uniqueness is maintained at the Code level.

Build

Level - Select to build a hierarchical level.

Select Number of Levels - This option is available if you select the Level option. Select the number of levels you want your hierarchy to consist of.

Parent Child - Select to build a parent child based hierarchy.

Entity Type

Entity types may be based on attributes or segments.

Parameter

The parameters listed are those available to be included in the data file.

Include in Data File

Select Yes if a dimension is not defined within the data file and there is a common member for all records.

Select No if you do not want the listed parameters in the data file.

Select

Available if you select No for a parameter. Search for other dimensions to associate with the data file.

Description of Fields for a User Load

Parameter

The parameters listed are those available to be included in the data file.

Include in Data File

Select Yes to include role in data file.

Select No if you do not want the role in the data file.

Select

Available if you select No for a parameter. Search for other dimensions to associate with the data file.

Description of Fields for a Workforce Data Load

Load Columns

Click to open the Load Columns dialog page to select additional columns you want to load.

  • Click a row.

  • Click the Save button.

  • The column will be added to the Data Load Column list.

Note:
Hold down the Ctrl key on your keyboard to select multiple columns at once.

In addition to the current default data load columns populated, the following required columns are displayed: 

  • Budget Entity Code

  • Budget Entity Segments (COA segments form the Budget Entity)

  • Rate per Hour

  • Work Hours - Basis (Annual / By Month)

  • Salary - Basis (Annual / By Month - the wage Type compensation item)

*For Review percentages, the ‘Multiple Reviews’ column is available in the Load columns selector. If selected, the columns related to the effective dates and percentages become available.

Scenario selection in Workforce – Employee Data Load Rule (DLR)

You can retain the Data Load Columns when you change the Scenario in the Define Overall Rule Settings tab for a Workforce – Employee Data DLR.

For example, when you select Scenario 2 after selecting Scenario 1 from the Scenario drop-down list, the Scenario Change pop-up window displays a confirmation message to retain the columns and if you click:

  • Yes: The columns from Scenario 1 will be retained for Scenario 2.

  • No: The columns from Scenario 1 will not be retained for Scenario 2. If there are any columns saved against the DLR for Scenario 2 then those columns will be displayed along with the default columns.

To save the column settings for a scenario, click Next.

Note:
When you load columns in Scenario 1 and then select Scenario 2, a confirmation message is displayed to retain the columns. If you click Yes, the columns from Scenario 1 are retained for Scenario 2 even if you do not save the columns for Scenario 1.

The image below displays the Scenario Change pop-up window when you select Budget 2017 scenario after selecting Budget 2013 scenario.

Scenario

Select the scenario to which you want to load employee data directly from the list box. The scenarios populated in the list box include those created using the Scenario Setup page.

Note:
For forecast scenarios, while loading allocations with Across columns data load format, it is not possible to select only the open periods for the load for data mappings. We recommend that you use the Down Rows and Across columns by Scenario options to load the allocations to the forecast scenarios if there are any closed periods.

*If annually varying compensation items are created in the scenario, relevant columns become visible for mapping automatically. They are otherwise hidden.

Payroll upload Date

Select the date for which you want to upload payroll data.

Data Load Column (Type)

Contains a list of columns that will be displayed (by Display Name) in the resulting load. Change the order in which the columns are displayed using the arrows to move the columns up and down the list. There are 5 system delivered/defined columns, which are:

  1. Employee Number

  2. Hire Date

  3. Pay Plan Code

  4. Employee Name

  5. Employee Type Code

Note:
System delivered/defined columns cannot be deleted.

Display Name

The name of the Data Load Column. 

Basis

The basis is used to calculate compensation amounts. This field is available for selection for compensation items.

  • Annual - Compensation amounts will be calculated on an annual basis.

  • By Month - Compensation amounts will be calculated on a monthly basis.

Default Value

This field is available for Position Name, Employee Name and Pay Plan Code. These three columns are system-defined and delivered and may not be deleted. Enter a default value to populate the column in the resulting load.

Time Mapping

Down Rows - Selecting this option indicates that time periods are defined down the rows.

Across Columns - Selecting this option indicates that time periods are defined across the columns.

Parameters for loading Wage Type Compensations

You can load Wage Type compensations (i.e. Salary) for compensation items with a Wage Type or Wage Type without Reviews basis.

The following options are available for Salary and Wage Hours parameters:

  • Annual Salary - Annual input value.

  • Monthly Salary - Monthly input value; the annual amount is calculated by multiplying this number by the number of fiscal periods.

  • Annual Wage Hours - Annual input value; the monthly amount is calculated by dividing this number by the number of fiscal periods.

  • Monthly Wage Hours - By period input value.

Note:
Rate, Hour and Default Value columns cannot be blank. Load a ‘0’ (Zero) value when a field is not applicable for the compensation.
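To illustrate the Salary and Wage Hours arithmetic above, here is a small sketch assuming a 12-period fiscal year (the function names are hypothetical, not part of Planful):

```python
# Illustrative conversions for the Salary and Wage Hours bases:
# a Monthly Salary input is annualized by multiplying by the number
# of fiscal periods, and an Annual Wage Hours input is spread by
# dividing by the number of fiscal periods.

FISCAL_PERIODS = 12  # assumed 12-month fiscal year

def annual_from_monthly_salary(monthly: float) -> float:
    return monthly * FISCAL_PERIODS

def monthly_from_annual_hours(annual_hours: float) -> float:
    return annual_hours / FISCAL_PERIODS

print(annual_from_monthly_salary(5000))  # 60000
print(monthly_from_annual_hours(2040))   # 170.0
```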

Description of Fields for a Currency Exchange Rates Load

Scenario

Select the Scenario for which you want to load exchange rates. Scenarios represent versions of a budget or forecast for a fiscal year and are created on the Scenario Setup page.

If you select a Budget type scenario, the Time Mapping Across Columns by Scenario option is available.

Apply Currency Conversion

Automatically converts local currencies to a common currency. 

Time Mapping

Across Columns by Scenario is available for Currency Exchange Rates if a budget scenario is selected while loading Exchange Rates.

Fiscal Year and Fiscal Month will be disabled in Default Settings. Fiscal Year will be disabled in Data Mappings.

Down Rows - Selecting this option indicates that time periods are defined down the rows.

Across Columns - Selecting this option indicates that time periods are defined across the columns.

Parameter

The parameters listed are those available to be included in the data file.

Include in Data File

Select Yes if a dimension is not defined within the data file and there is a common member for all records.

Select No if you do not want the listed parameters in the data file.

Select

Available if you select No for a parameter. Search for other dimensions to associate with the data file.

Select Translation(s)

Select currency translations to change the value of the source currency into another (foreign) currency target value.

Manipulate Input File

Overview

Manipulate the file or format of the data from which you are loading data. The top pane displays the raw data, and the bottom pane displays the manipulated data. Manipulation options include:

  • Join - merges multiple columns into a new column.

  • Split - divides a column into multiple columns.

  • Move - changes the display order of the columns.

  • Sort - categorizes or sorts the data based on a specific column.

  • Replace - finds and replaces for the selected column.

  • Create - creates a new column with a user-defined value.

  • Headers - allows the creation of column headings for the data.

  • Manipulations - provides a view of all the manipulations applied to the data.

Description of Fields for the Manipulate Input File Page

Join

Join two columns into one column by providing a Join Column Name and Join Operator.

Join Column Name - Select the columns you want to join by holding the Shift key on your keyboard while selecting them, then enter the name of the new column created by the join. For example, a Join Column Name might be COL3 for joined columns 1 (COL1) and 2 (COL2).

Join Operator:

  • None - No join operator.

  • Underscore - Add an underscore. For example, if you add column 6 containing X with column 7 containing Y, the joined column 8 will contain X_Y.

  • Space - Add a space between the contents of the two joined columns.

  • Dot - Add a dot between the contents of the two joined columns.

  • Hyphen - Add a hyphen between the contents of the two joined columns.

  • Add New - Add the newly joined column to the spreadsheet on the Manipulate Input File page.

Split

Split one column into two columns by providing Split at, Split Column Name 1, and Split Column Name 2 information.

Split at - Select the column you want to split, and enter the character position at which the split will occur within the column.

Split Column Name 1 - Enter the name of the first column you are creating by performing the split.

Split Column Name 2 - Enter the name of the second column you are creating by performing the split.

Add New - Split the column and display the new columns on the spreadsheet on the Manipulate Input File page.
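As a rough sketch of how the Join and Split manipulations transform column values (function names, operators, and the split point are illustrative, not Planful's implementation):

```python
# Join combines two column values with the chosen join operator
# (underscore, space, dot, or hyphen); Split divides one value into
# two at a character position.

def join(col1: str, col2: str, operator: str = "_") -> str:
    # "None" join operator would correspond to operator="".
    return f"{col1}{operator}{col2}"

def split(value: str, at: int):
    # Returns the two new column values created by the split.
    return value[:at], value[at:]

print(join("X", "Y"))       # X_Y  (Underscore operator, as in the example above)
print(split("2022Jan", 4))  # ('2022', 'Jan')
```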

Move

Move a column up, down, to the top, or to the bottom of the grid.

Sort

Sort the columns.

Replace

Replace a selected column name on the Replace page.

Replace/With - Enter the text you want to replace in the Replace field, and the text you are replacing it with in the With field.

Replace Entire Sheet - Replace across the entire grid of columns. If this checkbox is not selected, the replace only happens in the selected column. By default, Replace Entire Sheet is deselected.

Match Entire Cell Content - Select this checkbox to replace only those cells whose contents exactly match the text in the Replace field (full-text replacement). If you do not select this checkbox, the text is replaced wherever it appears in a cell, whether it makes up part or all of the cell contents. By default, Match Entire Cell Content is deselected.

Note:
You can enter special characters (i.e. / . > , < ; : “ ) when using the Replace option.
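The two Replace modes described above can be sketched as follows (the function and sample values are illustrative only):

```python
# With Match Entire Cell Content selected, a cell changes only when its
# full contents equal the Replace text; otherwise the Replace text is
# substituted wherever it occurs within the cell.

def replace_cell(cell: str, find: str, replacement: str, match_entire: bool) -> str:
    if match_entire:
        return replacement if cell == find else cell
    return cell.replace(find, replacement)

print(replace_cell("Default", "Default", "Common", True))     # Common
print(replace_cell("Default01", "Default", "Common", True))   # Default01 (no exact match)
print(replace_cell("Default01", "Default", "Common", False))  # Common01 (partial match replaced)
```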

Copy

Select the column you want to copy and click Copy to open the Copy page where you will create two new columns.

Description of fields on the Copy dialog page:

Source Column - Select the column you want to copy and click Copy to create two new columns. When you copy a column, the Source Column contains the information from the column you copied. Enter the name of the new column to which the contents will be copied.

Note:
All manipulations are available for newly copied columns.
Note:
Target Column - The Target Column contains information for the column you copy to.
Note:
Click Add New to add the new column as the first column in the Manipulations grid.

Create

Create a new column on the Create page.

Column Name - Enter the name of the new column.

Column Value - Enter the row value for the column.

Headers

Edit or create the heading names of the columns on the Headers dialog page. Manipulated Columns provides information on the name of the columns. Header Columns provides the name of the headers, which can be edited.

Manipulations

View data manipulations on the Manipulations page.

Order - Reverse the order of the columns.

Operation - The operations defined (e.g., Split, Join).

Join Operator - The join operators created when you performed the Join (underscore, dot, hyphen, or space).

Source Columns - The columns copied from when you performed the Copy.

Target Columns - The columns copied to when you performed the Copy.

Grid Action

Turn the row and column headers on or off.

Auto-Size Column Width - Automatically adjust the columns in the Raw Data and Manipulations Data grids to fit the contents.

The columns in the Manipulations grid may also be resized by dragging them to the right as you would in an Excel spreadsheet. The resized column settings are saved when you click Next or Finish.

Define Data Mappings

Overview

Map Planful source data columns (Source Column) to target segments (Maps To). This mapping tells the system where and how your data will display in the columns once the load takes place.

You can map up to 100 columns. For example, if the number of Budget Input Years is 5, you may map as many as 60 month columns (5 years × 12 months). However, it is not necessary to map all the months for a scenario. You must map at least one month to load data.

Description of Fields for the Define Data Mappings Page

Source Columns/Map To

View Source Columns and select a Maps To option.

  • If Across Columns was selected for Time Mapping on the Define Overall Rule Setting page, monthly values will be provided in columns when the data is loaded through File, Copy Paste or Web Services. There are multiple measures that can be loaded while loading sales data.

  • If the Across Columns option was selected instead of individual Measures, the Measure Code is available in the Data Mappings in addition to the other columns. Measure Code is required in the source file.

  • Support Role (if not selected on the Define Overall Rule Settings page to be included in the data file) is added to the Data Mappings as a mandatory field.

Load Data

Overview

This is the final step in the Data Load Process. If there were issues with the load, an error log is created. If the Data Load was successful, a message indicating so will appear. It's important to load data in chronological order to avoid discrepancies and multiple loads.

Note:
When you load data values with numerous decimal places, the Planful application automatically rounds each value to 6 decimal places. To load an Excel file, save the file as a txt (tab delimited) type and then load.

Once data is loaded, a status page displays the target scenario and the load status with any exceptions. If data does not load properly, click the Exception Report button to view the Data Load Exception Report.

Note:
Compensation and Allocation data loads for a Forecast Scenario are applicable for open periods only based on the scenario definition.

Processing of Data Load Rules - Scenario Locking

Available for Shared Mode Applications

This applies only to applications where Shared Mode is enabled. For information about Shared Mode, see the Shared Mode documentation. If you are not sure whether your application has Shared Mode, contact Planful Support.

Overview

To prevent Data Load Rules and Consolidation Processes from running on the same scenario simultaneously, Scenario Lock functionality is in place.

When a Data Load Rule and a Consolidation Process are executed, the first one initiated acquires a lock at the Scenario level, preventing the second process from executing on the same scenario until the first completes. If a second process is initiated before the first one completes, the system waits up to 10 minutes before canceling the second process. During those 10 minutes, the second process automatically attempts to acquire the scenario lock every 30 seconds. If the second process acquires the lock (meaning the first process completed within the 10 minutes), it executes and completes. If it does not acquire the lock within 10 minutes (meaning the first process is still running), it is canceled and you can retry it later. Status information (canceled, completed, and queued processes) is available in Cloud Scheduler.
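The retry behavior described above can be sketched as a polling loop. The 30-second interval and 10-minute limit come from the text; the lock object, function name, and return value are illustrative stand-ins, not Planful internals:

```python
import time

# Sketch of the scenario-lock retry: a second process polls for the
# lock every 30 seconds for up to 10 minutes, then is canceled.
POLL_INTERVAL = 30            # seconds between acquisition attempts
WAIT_LIMIT = 10 * 60          # cancel after ten minutes of waiting

def run_with_scenario_lock(scenario_lock, process,
                           clock=time.monotonic, sleep=time.sleep):
    deadline = clock() + WAIT_LIMIT
    while clock() < deadline:
        if scenario_lock.acquire(blocking=False):
            try:
                return process()      # lock acquired: execute
            finally:
                scenario_lock.release()
        sleep(POLL_INTERVAL)          # retry every 30 seconds
    return "canceled"                 # first process never released the lock
```

The `clock` and `sleep` parameters exist only so the timing can be simulated; a real scheduler would use wall-clock time.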

Application Behavior

You can run Data Load Rules synchronously and/or asynchronously.

Behavior When Running Asynchronous Data Load Rules

  • Asynchronous Data Load Rules run in the cloud and are executed via Cloud Scheduler.

  • Asynchronous Data Load Rules have a limit of one process per application. So when two Data Load Rules are executed by the same or different users (on the same or two different scenarios), they are queued in the Cloud Scheduler and executed one after another.

Behavior When Running Synchronous Data Load Rules

  • Synchronous Data Load Rules run immediately, and you cannot continue to work in the application until they complete.

  • You will not see entries in the Cloud Scheduler, but you will see entries in the Data Load History.

  • Two different users can run a Data Load Rule independently at the same time as long as each is run for a different scenario. If both target the same scenario, the subsequent process will fail and result in an error.

Behavior When Running a Combination of Synchronous and Asynchronous Data Load Rules

  • You can run Data Load Rules asynchronously and synchronously within the same application. If an asynchronous Data Load Rule is running and a synchronous Data Load Rule is initiated for a different scenario, both will be executed.

  • If an asynchronous Data Load Rule is running and a synchronous Data Load Rule is initiated for the same scenario, the synchronous Data Load Rule will fail immediately.

  • If a synchronous Data Load Rule is running and an asynchronous Data Load Rule is initiated for the same scenario, the asynchronous Data Load Rule will be queued in the Cloud Scheduler (the Status will be ‘running’ while waiting). The wait time is ten minutes, after which the application cancels the queued process.

Clear Data

Overview

Access Clear Data by navigating to Maintenance > DLR > Clear Data. Clear Data allows you to delete financial data posted by the Consolidation process or loaded using Data Load Rules.

We recommend that you clear data for a Data Load Rule within the Data Load Rule using the Clear Data Combination functionality shown below. If you use the Clear Data page to clear the data loaded in the Data Load Rule AND you select the Enable new Data Calculations option in the Data Load Rule, Clear Data will override Data Calculations.

Clear Data works for Actual scenario data. Clear Data does not work for budget modules when data load rules are in shared mode.

To delete other types of data, use the related application pages. For example, to delete budget data, access budget templates and delete the data.

The cube must be refreshed after a Clear Data is performed. The details of Clear Data are recorded in the Audit Log.

Dependent Data

Dependent data is not always deleted when Clear Data is performed. Dependent data includes data posted in one area of the application and referenced in another. For example, data referred to by more than one scenario and associated templates. Dependent data also includes data used in calculations. For example, data posted to local currency of a legal entity and to the common currency via the Currency Conversion process.

It is important that you verify the deletion of dependent data and manually delete necessary dependencies not deleted by Clear Data. After necessary data is deleted, run appropriate processes to recalculate referenced data. For example, rerun the Currency Conversion process, refresh history data in budget scenarios, and so on.

Future Period Data

When data for a period is deleted, MTD/YTD calculations for future periods are also deleted to ensure data accuracy. For example, if you load data for periods January through May 2018 and data for January 2018 is deleted, MTD/YTD calculations for periods February through May 2018 will be inaccurate. It is a best practice to reload your data using the correct future periods.
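The dependency between periods can be shown with a minimal sketch; the period labels, values, and function name are illustrative:

```python
# Sketch of why deleting one period invalidates later periods: YTD is
# cumulative, so every later YTD balance depends on the deleted value.
def ytd_from_mtd(mtd_by_period):
    """Accumulate monthly (MTD) values into year-to-date balances."""
    ytd, running = {}, 0
    for period, value in mtd_by_period.items():
        running += value
        ytd[period] = running
    return ytd

mtd = {"Jan-18": 100, "Feb-18": 50, "Mar-18": 25}
ytd_from_mtd(mtd)     # {'Jan-18': 100, 'Feb-18': 150, 'Mar-18': 175}
mtd["Feb-18"] = 0     # clearing February's activity...
ytd_from_mtd(mtd)     # ...changes March too:
                      # {'Jan-18': 100, 'Feb-18': 100, 'Mar-18': 125}
```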

Note:
There is no option to retrieve cleared data.

Description of Fields for Clear Data

Preview

Retrieves data based on selection criteria. This function allows you to preview queried data before performing the delete. If the result of the data queried is high volume (i.e. 100 columns or hundreds of thousands of records), the data set is automatically exported and not displayed on the screen. You will receive a message when the data volume is too high for screen output.

Delete

Deletes retrieved data. You may only delete data for the Data Load Rules you have access to. You must have access to the Consolidation Process to delete Consolidation data. An example of retrieved data is shown below.

Export Clear Data Files

After you have deleted data, you can export the deleted records from the confirmation message displayed on the screen. When you export the Clear Data file, the downloaded ZIP file contains a Clear Data CSV file for each quarter in the period selected for Clear Data. For example, clearing a scenario from Nov 2016 produces a ZIP file with one CSV file per quarter in that range.

The data is cleared sequentially from the bottom (last quarter) of the selected time period to the top (first quarter) to avoid nulls or voids in the data.
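The last-to-first quarter ordering described above can be sketched as follows; the (year, quarter) tuple representation and function name are assumptions for illustration:

```python
# Sketch of the clear order: quarters in the selected range are
# processed from the last quarter back to the first.
def quarters_in_clear_order(start, end):
    """Enumerate quarters from `end` back to `start`, inclusive.

    start and end are (year, quarter) tuples, quarter in 1..4.
    """
    year, quarter = end
    result = []
    while (year, quarter) >= start:
        result.append((year, quarter))
        quarter -= 1
        if quarter == 0:
            year, quarter = year - 1, 4
    return result

quarters_in_clear_order((2016, 4), (2017, 2))
# -> [(2017, 2), (2017, 1), (2016, 4)]
```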

See Also:

Data Load Rule Security

Consolidation Security

Selection Criteria

Toggle back and forth between the data retrieval page and selection criteria.

Provides a basis for data retrieval and deletion.

  • Scenario - the scenario from which you want to delete data. Scenarios must be unlocked.

  • Account/Company/Department/ICSegment - populated based on the segments defined on the Define Financial Segments configuration task page. Users must have Dimension Security configured to delete data.

  • Reporting - the highest level in a rollup, which contains children and calculation parameters.

  • From Period - the period from which you want to delete data. You may only delete data from unlocked periods. Future data will most likely be impacted.

  • Module - the module that contains posted data you want to delete. You might delete from Data Load Rules and the Consolidation module.

  • Template - the template you want to delete data from.

Note:
The Clear Data functionality in the Non-shared Mode includes two additional fields Module and Template under the Selection Criteria pane. These two fields are not available in Shared Mode.

More - Settings

Display

  • Label - a combination of code and name associated with hierarchical members.

  • Code

  • Name

Decimals - the number of decimals displayed for numerical values.

Remember my selections - retains selections.


Use Cases and How Tos

Use Case on Data Load Rule File Load Level Based Approach

In this use case, Department dimensions are loaded from an Excel file in a level based hierarchical format using a Data Load Rule. The source data is shown below.

  1. Navigate to Maintenance > DLR > Data Load Rules.

  2. Click the New Data Load Rule tab.

  3. For Load Type, select File.

  4. For Load Item, select Segment Hierarchies.

  5. For Load Sub Item, select Department.

  6. Click Next, or click the Select Sample Input File tab.

  7. For File Type, select .xls.

  8. There is one header row so enter a 1 in the Row Containing Header field (shown in image below). For Number of Rows to Skip, enter 1 (also shown below). For Source Fields, enter 4 as there are 4 columns in the .xls file that are being loaded.

  9. Click Choose File (under File Information) and select the .xls file shown at the beginning of this use case.

  10. Click Next.

  11. For Build, select Level. There are 2 levels in this hierarchy, so select 2 in the Max Number of Levels drop-down list. Then, click Next.

  12. Click the Manipulate Input File tab or click Next. This is where you can manipulate the file or format of the data from which you are loading data.

  13. Click Define Data Mappings or click Next. Map Planful source data columns (Source Column) to target segments (Maps To). This mapping tells the system where and how your data will display in the columns once the load takes place.

  14. Click Load Data or click Next. This is the final step in the Data Load Process. If there were issues with the load, an error log is created. If the Data Load was successful, a message indicating so will appear. Click Finish.

Here is the data in the Segment Hierarchies screen:

How to Load Zeros Through Data Load Rules to Reflect in a Template

Loading zeros through Data Load Rules will not display 0 in the template. If you want data in a template to display as a zero (0) rather than a dash (-), load 0.000001 (a 1 in the sixth decimal place) instead of 0 via Data Load Rules. The Data Load Rule will indicate data was successfully loaded and the values will be shown when the template is opened.

Note:
This functionality behaves as it does because 'zero' is not recognized as a value.

The values in a template can be zeroed manually as well.
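The workaround above amounts to a pre-load substitution, sketched here in Python; the function name is illustrative:

```python
# Sketch of the zero workaround: substitute a sixth-decimal value for
# true zeros before loading, so templates display 0 instead of a dash.
ZERO_SENTINEL = 0.000001

def prepare_for_load(values):
    return [ZERO_SENTINEL if v == 0 else v for v in values]

prepare_for_load([125.5, 0, -40.0])
# -> [125.5, 0.000001, -40.0]
```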

NetSuite Integration

To complete the NetSuite integration with Planful, you must first complete a one-time setup, which consists of 3 steps.

  1. Configuration in NetSuite

  2. Installation of the SuiteScript Bundle

  3. Configuration in Planful

    Complete steps 4-6 once steps 1-3 are complete. The steps below are not a one-time activity.

  4. Creation of Saved Searches in NetSuite

  5. Creation of a Data Load Rule to Load Data Resulting from the NetSuite Saved Search

  6. Creation of a Cloud Scheduler Process

Configuration in NetSuite

To complete the Configuration in NetSuite step, complete 3 tasks.

  1. Enable Features (i.e. Token Based Authentication)

  2. Create an Integration Application

  3. Generate an Access Token

Enable Features

Token-based authentication is used to secure the connection between Planful and NetSuite.

  1. Navigate to Setup > Company > Enable Features.
  2. The last tab is SuiteCloud. Click it.
  3. Scroll down to the SuiteScript section.
  4. Select the SERVER SUITESCRIPT checkbox.
  5. Scroll to Manage Authentication and select the TOKEN-BASED AUTHENTICATION checkbox and save.

Create an Integration Application

Completion of this task will result in generation of a Consumer Key and Consumer Secret. The screen will be displayed only once. Make note of the content displayed as it is to be used while setting up NetSuite Connect in Planful. Tip! Copy and paste the Consumer Key and Secret into Notepad.

  1. Navigate to Setup > Integration > Manage Integrations > New.
  2. Fill in the name and select Enabled for State and Token-Based Authentication.
  3. Click Save.
  4. The Confirmation page appears. Copy and paste the Consumer Key and Secret into Notepad or some other application.

Generate an Access Token

Completion of this task will result in the generation of a Token ID and Token Secret.

If you are an Admin user, complete the steps under Admin User. If not, complete the steps under Role Level.

Note:
If the token is generated for a user with an Administrator role, that Administrator must create the token. This step cannot be performed by any other user or Administrator. If the token is generated for a user with a role other than Administrator, this step can be performed by any user or Administrator.

Admin User

  1. Navigate to Home > Settings > Manage Access Tokens.
  2. Click New My Access Token.
  3. Select the Application Name and click Save.

Role Level

  1. Navigate to Setup > Users/Roles > Manage Roles.
  2. Edit the Role mapped to the user by clicking on it.
  3. When the Role page appears, click Edit.
  4. Scroll down to Permissions.
  5. Click the Setup tab and add the Permission of User Access Tokens and click Save.

Generate the Access Token

  1. Navigate to Setup > User/Roles > Access Tokens > New.
  2. Select the Application Name that you provided in task 2. Select a User and Role. Click Save.

The screen below will be displayed only once. Make note of the information on this screen as it will be used while setting up NetSuite Connect in Planful. Tip! Copy and paste this information into Notepad.

Install the Bundle for SuiteScript in NetSuite

  1. Navigate to Customization > SuiteBundler > Search & Install Bundles.
  2. Find Planful NetSuite Connect and select it.
  3. Click Install as shown on the screen below.
  4. Once you click Install, the page below is displayed, click Install Bundle.
  5. Find Planful_NetSuiteConnect within the NetSuite browser and click Page:Planful_NetSuiteConnect.
  6. Click View.

Make note of the External URL as this information is used while setting up NetSuite Connect in Planful. Tip! Copy and paste this information into Notepad.

Configuration in Planful

  1. Open Planful and navigate to Maintenance > Admin > Configuration Tasks > Cloud Services
  2. Click the NetSuite Connect tab as shown below.
  3. Click Add to create a NetSuite Profile. You need to create a profile to establish the connection between Planful and NetSuite. Data can be pulled from multiple NetSuite databases. You can create multiple profiles corresponding to each database. The Configure Profile screen appears.
  4. Enter a name for the profile.
  5. Enter the Token ID and the Token Secret generated and provided by NetSuite. Tokens are used to authenticate and authorize data to transfer over the internet.
  6. For Consumer Key and Consumer Secret, NetSuite provided this information. Consumer Key is the API key associated with NetSuite. It identifies the client.
  7. The RESTlet url is the External URL. A Router is a RESTlet that helps to associate a URI to the RESTlet or the Resource that will handle all requests made to this URI. In plain terms this means that the url provides a route from Planful to NetSuite.
  8. Enter your NetSuite Account ID in the Account ID field.
  9. From the Signature Type drop-down, select an appropriate option. You can use the HMAC-SHA256 signature type to connect with the latest NetSuite version.
  10. Click Save. Your configuration will look something like the following image.


The Profile is created when all the details provided in the screen above are valid. Once the Profile is saved successfully, the NetSuite user will receive an email with the subject “A User Error Has Occurred”. Please ignore this email. The email is sent from the user’s own email account, so it will also appear in the Sent folder.

To further explain, when you configure the NetSuite Profile in Planful, Planful sends the user credentials along with the Saved Search ID as ‘0’ and only upon receipt of the response (that there exists no Saved Search with the ID ‘0’) are the user credentials validated. This email will be sent every time a Profile is created, or an existing Profile is re-saved.

Create a Saved Search in NetSuite

Create a Saved Search to extract data from NetSuite that you want to load to Planful.

  1. Navigate to Reports > Saved Searches > All Saved Searches.

  2. Click to view All Saved Searches or click New to create another.

  3. Select a Search Type. This is important because the field/column types that prepopulate the saved search are based on the Search Type. For example, if you select the Contact type, the saved search will prepopulate with Name, Email, Phone, and so on in the Results column.

  4. Create the Saved Search with the required columns that can be used to map to the columns in Planful while loading data.

NetSuite Saved Search Best Practices

  • For GL Data and Transactions Data, only a Date type of column can be mapped to a ‘Date’ target column in Define Data Mapping step in Planful.

    • The date can be picked from the ‘Transaction Date’ in NetSuite.

    • To use Posting Period to map to the date column in Planful, use the formula below (of type Date) in the Saved Search:

    to_date(ConCat('01 ', {postingperiod}), 'dd.mm.yyyy')

    Any other field of type date can be mapped in Planful.

      Note:
      When loading data to Planful using a given date range, the records with empty date values will not be loaded. This may happen when the NetSuite transactions have empty or null custom date fields which are used in the Saved Search and the custom date field being mapped in Planful.
  • If any of the segment members are blank in NetSuite transactions, for example, Location, then the formula below can be used to populate Null with default value. The below formula uses DEFAULT as a placeholder for the default value.

    NVL({location.namenohierarchy}, 'DEFAULT')
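Python analogues of the two Saved Search formulas above may help clarify what they do. The real expressions run inside NetSuite; the sample period format and values here are illustrative assumptions:

```python
# Python analogues of the Saved Search formulas. The real expressions
# run inside NetSuite; sample values are illustrative.
from datetime import datetime

def posting_period_to_date(posting_period):
    """Mimic to_date(ConCat('01 ', {postingperiod}), ...): prefix day
    01 so a posting period such as 'Feb 2018' parses as a real date."""
    return datetime.strptime("01 " + posting_period, "%d %b %Y").date()

def nvl(value, default):
    """Mimic NVL(...): substitute a default when the field is blank."""
    return default if value is None else value

posting_period_to_date("Feb 2018")   # first day of the posting period
nvl(None, "DEFAULT")                 # blank Location becomes "DEFAULT"
```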

RESTlet Limitations

The following record types are not supported:

| Record Category | Record Type |
| --- | --- |
| Customization | CRM Custom Field (definition) |
| Customization | Custom Record Custom Field (definition) |
| Customization | Entity Custom Field (definition) |
| Customization | Item Custom Field (definition) |
| Customization | Item Options Custom Field (definition) |
| Customization | Other Custom Field (definition) |
| Customization | Transaction Body Custom Field (definition) |
| Customization | Transaction Column Custom Field (definition) |
| Customization | Custom Record Type (definition) |
| Entities | Group |
| Entities | Win/Loss Reason |
| Lists | Budget Category |
| Lists | Currency Rate |
| Marketing | Campaign Keyword |
| Other Lists | Lead Source |
| Other Lists | Win/Loss Reason |
| Support | Support Case Issue |
| Support | Support Case Origin |
| Support | Support Case Priority |
| Support | Support Case Status |
| Support | Support Case Type |
| Support | Support Issue Type |
| Transactions | Budget |
| Transactions | Intercompany Transfer Order |
| Website | Site Category |

Creating a Data Load Rule to Load Data Resulting from the NetSuite Saved Search

Now that configuration is complete, create a Data Load Rule (DLR) to load NetSuite Data via the Saved Search. Follow the steps below.

  1. Navigate to Maintenance > DLR > Data Load Rules.

  2. Click the New Data Load Rule tab.

  3. Enter a name for the DLR.

  4. For Load Type, select NetSuite Connect as shown below.

  5. Select to load Segment Hierarchies, Currency Exchange Rates, GL Data, Translations Data or Transaction Data. These are the only supported load type items at this time.

  6. For Connection Profile, select the name of the profile configured above. In this case, it was called NS User.

  7. Select a Load Sub Item and click Next.

  8. Row with Header and # of Rows to Skip are read-only. For # of Source Fields, specify the number of columns from the NetSuite Saved Search to be mapped in the Define Data Mappings step.

  9. Enter the Saved Search ID or alternatively use the browse icon to select the NetSuite Saved Search in the ‘Select Sample Input File’ step.

  10. For the Manipulate Input File step, you can perform data manipulations, which allow you to change the way data is loaded from your source solution to Planful (the target). You can perform the following actions:

    • Add new column

    • Duplicate existing column

    • Edit Headers Name

    • Join Data

    • Split Data

    • Replace

    • Show Column Header

    • Manage Manipulations

      Note:
      The Data Manipulations functionality is enabled only for the RESTlet mode and not applicable for the Web Services mode.
  11. You can manipulate the data displayed in the Raw Data pane. The manipulated data is displayed in the Manipulated Data pane.

  12. For the Define Data Mappings step, map the Saved Search Source Column to the column in Planful using the Maps To column. For a GL Data or Transactions Data load, you can filter the data to be loaded at the segment level using a Filter Condition of ‘Include/Exclude’ and comma-separated segment codes as Filter Values.

    The NetSuite columns are displayed under the Source Column field in the Define Data Mappings page. The number of source columns is based on the value of # of Source fields in the Select Sample Input File step.

Note:
Level-based data load is enabled for the NetSuite Connect DLR in the Define Overall Rule Settings step. Also, the YTD (Year-to-Date) and Account Based options are enabled in the Data Type drop-down list for the NetSuite Connect DLR.

Workaround for Existing DLR

The Data Manipulations functionality is applicable only to NetSuite Connect DLRs that are created after the June 2019 Release and in the RESTlet mode.

Note:
This workaround is not applicable for Web Services mode DLRs.

For customers using the RESTlet mode prior to June 2019 Release or for customers who migrated to RESTlet mode from the Web Services mode, the workaround steps to enable the Data Manipulations functionality are as follows:

  1. From the Load Type drop-down list, select any load type other than NetSuite Connect. For more information, see the Data Integration Admin Guide.

  2. From the Load Type drop-down list, select NetSuite Connect.

When you navigate to the Manipulate Input File page, you can view the raw data and can perform the required Data Manipulations.

When you navigate to the Define Data Mappings page, you can view columns from NetSuite under the Source Column field.

Note:
When you copy a NetSuite DLR created prior to June 2019, the Data Manipulations functionality is enabled for the copied DLR only after you perform the workaround steps.
Note:
When you select any value other than NetSuite Connect in the Load Type drop-down list, all the configurations are reset and you must set up the DLR again.

Best Practices

  • Navigate to the Manipulate Input File step prior to the Define Data Mappings step to fetch the latest column header information and to preview the saved search data.

  • When the Process Flow is running, you should not navigate to the Manipulate Input File page of the DLR used in the Process flow.

Create a Cloud Scheduler Process in Planful

It is recommended that you create a Cloud Scheduler Process in Planful to run the DLR. To do so, follow the steps below.

  1. Navigate to Maintenance > Admin > Cloud Scheduler > Process Flow.

  2. Create a Process Flow and add tasks with the Data Load Rule task type.

  3. Select the DLR just created and schedule it to run. Below is a screenshot of a Scheduled Process for GL Data where the Period selected is Current Financial Year.

For detailed information on how to set up processes in Cloud Scheduler, see the Cloud Scheduler guide.

Data Calculation Exceptions

Overview

This functionality instructs the system, during the data load process, to take default values for missing data for periods where no entries are posted. Access Data Calculations by navigating to Maintenance > DLR > Data Calculation Exceptions.

Behavior When Data Calculation Exceptions are Not Defined

By default, if data is not provided for a particular period, Planful makes certain assumptions during the data load process when loading data from your source system to Planful.

In the example below, values are posted for January, February, and April, but there are no posted values for March or May in the source system. When this is the case, Planful assumes that the account has no balance, which results in a zero year-to-date balance. The system also reverses the balance showing each balance as activity for the period for the month-to-date value. This is why, in the example below, March is -20 and May is -10.

| Company | Dept | Acct | Product | Project | Intercompany | Jan-15 | Feb-15 | Mar-15 | Apr-15 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2100 | default1 | 3320 | Default | Default | Default | 555 | 555 | 555 | 555 |
| 2120 | default1 | 3320 | Default | Default | Default | 212 | 212 | 212 | 212 |
| 2140 | default1 | 3320 | Default | Default | Default | 214 | 214 | 214 | 214 |
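The default derivation described above (March becomes -20, May becomes -10 when those months have no postings) can be sketched as follows; the function name and period keys are illustrative:

```python
# Sketch of the default assumption: a period with no posted entries is
# treated as a zero YTD balance, so the derived MTD activity reverses
# the prior month's balance.
def mtd_from_ytd(periods, ytd_by_period):
    """Derive MTD activity from YTD balances, treating gaps as zero."""
    mtd, prior = {}, 0
    for period in periods:
        ytd = ytd_by_period.get(period, 0)   # no entry -> zero balance
        mtd[period] = ytd - prior
        prior = ytd
    return mtd

periods = ["Jan", "Feb", "Mar", "Apr", "May"]
mtd_from_ytd(periods, {"Jan": 10, "Feb": 20, "Apr": 10})
# -> {"Jan": 10, "Feb": 10, "Mar": -20, "Apr": 10, "May": -10}
```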

Behavior When Data Calculation Exceptions are Defined

Let's say you don't want the system to behave as it does by default. Instead, you want the system to assume account values based on account type and the type of data you load from your source system. To make this happen, you must define data calculation exceptions.

For example, you have an income statement account and a sales return is posted to the account for January. But, in February, nothing is posted to the account. The account is not zero; there was just no activity for the month of February and you want the account to show the balance from the previous month (January). To make this happen, define a data calculation exception.

In the Income Statement example below, there is no activity to the account for May through December, but a data calculation exception is defined so that the balance is carried forward from May onward.

Assume MTD Activity = 0

| Source | Load Type | Account | Measure | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DLR | MTD | Flow | MTD | 10 | 10 | 0 | 10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
|  |  |  | YTD | 10 | 20 | 20 | 30 | 30 | 30 | 30 | 30 | 30 | 30 | 30 | 30 |

In the Balance Sheet account example below, you want the bank account to post balances for the account each month even when there is a zero balance for a period.

Assume YTD Activity = 0

| Source | Load Type | Account | Measure | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DLR | MTD | Flow | MTD | 10 | 10 | -20 | 10 | -10 |  |  |  |  |  |  |  |
|  |  |  | YTD | 10 | 20 | 0 | 10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

Configuring Data Calculations

The Data Calculations configuration setting instructs the system to take default values for missing data for periods where no entries are posted. Complete the Data Calculation configuration by following the steps below.

  1. Navigate to Maintenance > Admin > Configuration Tasks.

  2. Select the DLR Data Calculations configuration.

  3. Select to apply the calculation exceptions to all scenarios or selected scenarios.

  4. If All Scenarios is selected, select MTD or YTD for Flow Accounts and Balance Accounts. This setting is used to calculate and post values by Data Load Rules for periods where no data is posted.

  5. If All Scenarios is not selected, click Add to select scenarios you want to apply data calculations to. Then, select MTD or YTD for Flow and Balance Accounts.

Specifying Calculation Exceptions

Once configuration is complete, specify accounts to apply data calculation exceptions to. Access the Data Calculation Exceptions page by navigating to Maintenance > DLR > Data Calculation Exceptions.

The application page is divided into 2 panes; left and right. All accounts from your account hierarchy are listed on the left pane. All company dimensions from your company hierarchy are available on the right pane.

Applying Data Calculation Exceptions to Companies and Accounts

  1. In the right pane, click the Add button to enable the Add - Company with Data Calculation Exceptions page. On this page, select company dimension members you want data exceptions applied to based on the configuration. To select several members at once, hold down the shift key.

  2. Set MTD or YTD for Set Zero Default Value.

  3. Click Copy To to set the value of the account for the dimension. When the Copy To - Accounts page opens, select the accounts you want to apply the exception to. Click Copy.

  4. Click Save.

  5. Click Output. In the output below, you can see that the Copy To account selected is 1110 Cash and Marketable Securities. And, each of the company dimensions are listed with the associated zero default value selected.

How to Mass Upload DLR Data Calculation Exceptions

  1. Use the Output All feature to export data load calculation exceptions by account code to an Excel file.

  2. Once the file is exported, make the necessary adjustments and save the newly exported file to your local machine. You have two options:

    1. Remove the header row and upload the file.

    2. Leave the header row. This option results in an error upon loading: the header row fails because it contains invalid data, but the data rows are still loaded. For example, if you load a file with 1 header row and 10 data rows, only the header row fails; the 10 data rows are uploaded.

  3. Instead of selecting each line, clicking the Copy To option, and selecting the accounts to copy the exception to, click Upload File. This action uploads the file with the calculation exception adjustments that you just saved to your local machine.

  4. Click the Choose File button and select the saved file to load.

  5. Click Submit to perform the upload. A message appears indicating that the upload is processing and that you can check the status in the Detail Log.
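The header-row choice in step 2 can also be handled locally before upload. The following is a minimal Python sketch that drops the header row from an exported exceptions file so the load completes without the header-row error; the file names and column layout are illustrative assumptions, not a Planful format specification:

```python
import csv

def strip_header(src_path: str, dst_path: str) -> int:
    """Copy an exported calculation-exceptions file, dropping the header row.

    Returns the number of data rows written to dst_path.
    """
    with open(src_path, newline="") as src, open(dst_path, "w", newline="") as dst:
        reader = csv.reader(src)
        writer = csv.writer(dst)
        next(reader, None)  # skip the header row so the upload reports no error
        rows = 0
        for row in reader:
            writer.writerow(row)
            rows += 1
    return rows
```

Alternatively, you can leave the header row in place and ignore the single header-row error, as described above.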

Verify Data

Overview

After loading data using Data Integration functionality, access the Verify Data page to confirm the accuracy of the postings and verify where data has been sourced from. To access Verify Data, navigate to Maintenance > Reports > Verify Data.

Note:
Posting records are the lowest level members of a hierarchy.

Description of Fields on the Verify Data Page

Selection Criteria

Click to view the grid only. Click again to view the dimensions, hierarchy, and grid.

Display

Select Code, Name, or Label associated with the dimension parameters. Selecting Label displays both the code and the name.

Scenario

Select the Scenario associated with the data loaded.

Dimensions

The dimensions associated with the selected scenario. Select the relevant member from the hierarchy. If sourcing data from Actuals, select a member from the Time dimension.

Hierarchy

Select the entire hierarchy by clicking the top level. Expand and collapse the tree structure, or search for a specific member to associate with the dimension. You can select multiple members.

Preview

Once you’ve selected dimension parameters, click to display all records. The Data Preview screen appears.

The following fields are enabled under the More list-box:

Grid On/Off - Show or hide grid lines.

Freeze On/Off - Freeze or unfreeze the grid.

Decimals - Display 0 to 6 decimal places for selected cells. The default is set to 2 decimal places.

Show rows - Represents the record numbers displayed on the grid.

Go to - Enter the page number of the page you want to view.

Auditing Data Loads

Overview

Planful provides different levels of user auditing for a variety of Data Integration tasks as well as other tasks. Auditing options are:

  • View Audit Log - select areas of the application for which you would like to view the actions taken by a selected user.

  • Data Load History - view successful and unsuccessful Data Integration - Data Load Rule processes.

Data Load History

View successful and unsuccessful Data Integration - Data Load Rule processes. Access the Data Load History page by navigating to Maintenance > Audit > Data Load History.

Data Load Rule

Select the Data Load Rule for which you would like to view report log information.

Type

Select to view audit log data for all data loads, those that failed, or those that were successful.

From / To

Select From and To period dates to view related report log details.

View History

Generates results based on selection criteria.

Close

Close the Selection Criteria pane.

Detailed Report

View the Data Load History Report, which contains detailed information, such as the name of the loaded file, the number of records loaded, the data type, and time mapping information.

Click Exception Report to view the Data Load Exception Report, which contains information on data loads that were not successful. 

Grid

The following fields appear in the grid:

Start Time - Displays the time at which the Data Load Rule process started.

End Time - Displays the time at which the Data Load Rule process was completed.

Status - Indicates whether the data load was successful or not.

Description - Displays details about the execution of the Data Load Rule process.

Record Count - Displays the number of records loaded in the Data Load Rule.

Run Id - Displays a unique key associated with the Data Load Rule. You can use the Run Id when you contact the Support team for Data Load Rule related issues.

User - The user that performed the data load.

Status Message - Displays the name of the data load rule. This column is no longer available in the grid; however, it is available in the exported or printed Data Load History report.

How to Verify Data was Loaded to Planful

After loading data using Data Integration functionality, it's important to confirm the accuracy of the postings and verify where data has been sourced from. To verify loaded data, complete the following steps:

  1. Navigate to Maintenance, Reports, and click Verify Data.

  2. Select a scenario.

  3. Click the magnifying glass to select dimension parameters.

  4. Click the Preview button.

  5. Review data records via the spreadsheet that opens once you click Execute.

Opt In Features

Data Load Missing Segment Member - Opt In Feature

This feature, available by request, eliminates data load failures due to missing segment members: members that exist in the data files loaded to Planful but not in Planful itself. For example, during a web service file load, if Planful encounters segment members that are not in the system, the load fails. This feature adds the missing segment members automatically. Please contact Planful Support if you would like to take advantage of this feature.

Note:
This feature works for GL Data Load types only. It works with all forms of data loads (web services, file loads) except for copy/paste.

How it Works

Let's say segment member Account X exists in a data file you are loading to Planful. However, Account X is considered "missing" because it does not exist in Planful. With this feature, Planful adds Account X with default properties. Upon completion of the data load, you receive an email indicating which segment members were created. At this point, you can modify the default properties of the added Account X and position it properly within the tree structure. Then, rerun the Data Load Rule so that all data loads as intended.

To ensure that the account properties can be changed, enable "Enforce changing the account properties when the data is available" on the Maintenance > Security Administration page.

See Also: Opt-In Features and Functions

Data Load Cloud Processing - Opt In Feature

This functionality allows all GL Data and Translation data loads (set up using Data Load Rule functionality) to be automatically updated and processed in the cloud. If you’re interested in this functionality, please contact Planful Support.

See Also: Opt-In Features and Functions

Data Load Limitations and Error Resolution

Limitations when Loading Data

  • GL Data and Transaction data load through the Planful Data Load Rules interface (copy/paste option) – suggested limit is 15K records with Across Columns on the Select Sample Input File page.

  • GL Data and Transaction data load through the Planful Data Load Rules interface (File Load) – suggested limit is a file size of 10 MB (approximately 110K records).

  • GL Data and Transaction data load through Web Services – suggested limit is a file size of 40 MB (approximately 400K records).

  • Employee compensation type data loads – suggested limit is approximately 15K records.

  • If you have requirements that exceed the limitations listed above, please contact Planful Support.
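The suggested limits above can be checked client-side before submitting a load. Below is a minimal Python sketch under those assumptions; the limit constants come from the list above, and the batching helper is an illustration of how you might pre-split a copy/paste load locally, not a Planful feature:

```python
import os

# Suggested limits from the documentation; treat these as guidance, not hard caps.
COPY_PASTE_RECORD_LIMIT = 15_000            # DLR copy/paste option
FILE_LOAD_BYTE_LIMIT = 10 * 1024 * 1024     # ~110K records via file load
WEB_SERVICE_BYTE_LIMIT = 40 * 1024 * 1024   # ~400K records via web services

def within_file_load_limit(path: str) -> bool:
    """Return True if the file is within the suggested 10 MB file-load limit."""
    return os.path.getsize(path) <= FILE_LOAD_BYTE_LIMIT

def split_records(rows, chunk_size=COPY_PASTE_RECORD_LIMIT):
    """Yield lists of at most chunk_size rows, for batching copy/paste loads."""
    chunk = []
    for row in rows:
        chunk.append(row)
        if len(chunk) == chunk_size:
            yield chunk
            chunk = []
    if chunk:
        yield chunk
```

Loads that still exceed the suggested limits after batching are a case for contacting Planful Support, as noted above.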

Troubleshooting Data Load Errors

Data Load Error : Loaded Time 'Jan-19' is not matching with the scenario '2019 1+11 Forecast Fiscal Year '2019'

This data load error occurs when a user attempts to load data to a period that belongs to another scenario. In this case, the user is loading data to Planful via a Data Load Rule to a forecast scenario where the first period is Jan-19, but the Forecast is set to begin with Feb-19.

The Jan-19 period is sourced from another scenario (Actuals by default), so Planful no longer allows data to be loaded to this period in this scenario. To load data to this period, load it in the source scenario, then refresh actuals in the forecast to move that data into the forecast scenario.

This will be the case with future scenarios as the user moves the forecast period forward.
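The period check behind this error can be reproduced locally before loading. The following is a minimal Python sketch under stated assumptions: the "Mon-YY" period-label format and the (label, value) row shape are illustrative, not a Planful specification. It separates rows that precede the forecast start, which must instead be loaded to the source scenario:

```python
from datetime import datetime

def parse_period(label: str) -> datetime:
    """Parse a 'Jan-19' style period label (format is an assumption)."""
    return datetime.strptime(label, "%b-%y")

def split_by_forecast_start(rows, forecast_start: str):
    """Split (label, value) rows into those loadable to the forecast and
    those that belong in the source scenario (e.g. Actuals) because they
    fall before the forecast's first period."""
    start = parse_period(forecast_start)
    loadable, source_scenario = [], []
    for label, value in rows:
        if parse_period(label) >= start:
            loadable.append((label, value))
        else:
            source_scenario.append((label, value))
    return loadable, source_scenario
```

Filtering out the early periods up front avoids the "Loaded Time ... is not matching with the scenario" failure described above.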

