Data Load Rules
The Planful suite includes built-in data integration functionality, which allows you to import your data.

A Data Load Rule (DLR) tells the Planful system how to handle data values in the data source during a data load. You can create data load rules to load files, copy and paste data, and use web services to load segment hierarchies, attributes, attribute hierarchies, entity hierarchies, users, Workforce data, currency exchange rates, and other data. There are several steps to create a data load rule. These steps vary based on source and target data options.

The first page you will see when you access the Data Load Rules page is the Data Load Rule List. Click New Data Load Rule and the Data Integration wizard steps you through the remainder of the process.

Types of Data Loads

The following methods for data import are available:

Spreadsheet Based Data Load

Using this approach, your source data is first exported to Microsoft Excel and then copied into the Planful browser-based spreadsheet interface through a copy/paste operation.

This approach is user-friendly and the most commonly used; however, when the data volume is large (above 60,000 records per data load), alternative methods are recommended.

Note:
When using Excel files, ensure all numerical data is formatted as Number. Data formatted as Accounting or other formats is not supported.

File Based Data Load

Using this approach, your source data is first exported into a text, Excel, or CSV file format. Then, the load file setup is configured in the application through load mapping settings. Lastly, the source file is uploaded to the Planful application database.

If you set up a File-based data load rule and later want to load data via Web Services, instead of re-creating the entire DLR (Data Load Rule), you can simply edit it and change the Load Type to Web Services. This flexibility saves time and ensures consistency in loading data when the same DLR is used. However, it is very important, and a best practice, to complete all the steps in the DLR to effectively change the DLR type. Otherwise, the DLR will be rendered unusable.

You can switch between the types of DLRs:

  • File Load
  • Web Services
  • Copy – Paste
  • NetSuite Connect

For example, to change the load type from File to Web Services, click the DLR. Change the Load Type to Web Services and complete the remaining steps of Select Sample Input File, Define Overall Rule Settings, Manipulate Input File, Define Data Mappings, and Load Data for the changes to take effect.

Data Load Rules (DLRs) are automatically sorted alphanumerically. Sorting alphanumerically allows you to more quickly identify a Data Load Rule without scrolling through the lists.

If you sort on any column on the Data Load Rule List page, navigate away, and then return to the same page, the sorting is retained.

Note:
The orange caret displayed in any of the columns denotes that the user has sorted the DLR list on that column.

Template Based Data Load

Using this approach, data is loaded into the Planful suite via data input templates in which data is manually entered. Administrative users can design and publish the data input templates, which are then accessed by end users who input data into the templates.

This approach is recommended when the data needs to be loaded into the Planful suite from several disparate General Ledgers with significant dissimilarities in the chart of accounts and the data translation mappings from the subsidiary general ledger data to the common reporting chart of accounts do not exist or would be hard to create.

Web/Cloud Services

See the Admin Guide to Loading Data to Planful.

High Level Steps to Create a Copy-Paste DLR to Load Data

  1. Access the Data Load Rules page by navigating to Maintenance > Data Integration > Data Load Rules.
  2. Click the New Data Load Rule tab.
  3. Enter Name.
  4. Select Copy - Paste as the Load Type on the New Data Load Rule page. 
  5. Select Load Item and Load Sub Item from the drop-down.
  6. Copy and paste data from your source file to the spreadsheet on the Select Sample Input File page.
  7. Select parameters, defaults, and segment information to further define the way you want to load data using the Define Overall Rule Settings page.
  8. Transform/manipulate the data you are loading to best suit your needs on the Manipulate Input File page.
  9. Map source data columns to targets using the Define Data Mappings page.
  10. Perform the load using the Load Data page.
Note:
You can paste up to a maximum of 1000 rows and 70 columns.

High Level Steps to Create a File DLR to Load Data

  1. Access the Data Load Rules page by navigating to Maintenance > Data Integration > Data Load Rules.
  2. Click the New Data Load Rule tab.
  3. Enter Name.
  4. Select File Load as the Load Type on the New Data Load Rule page.
  5. Select Load Item and Load Sub Item from the drop-down.
  6. Load the source file using the Select Sample Input File page.
  7. Select parameters, defaults, and segment information to further define the way you want to load data using the Define Overall Rule Settings page.
  8. Transform/manipulate the data you are loading to best suit your needs on the Manipulate Input File page.
  9. Map source data columns to targets using the Define Data Mappings page.
  10. Perform the load using the Load Data page.
Note:
To load a Text file, save the file as a txt (tab delimited) type and then load.

High Level Steps to Create a Web Services DLR to Load Data

  1. Access the Data Load Rules page by navigating to Maintenance > Data Integration > Data Load Rules.
  2. Click the New Data Load Rule tab.
  3. Enter Name.
  4. Select Web Services as the Load Type on the New Data Load Rule page.
  5. Select Load Item and Load Sub Item from the drop-down.
  6. Use an external Data Integration tool (like Boomi) to load data to the Planful system.
  7. The steps will vary based on the external Data Integration tool used.

Steps to Convert a DLR Load Type

When you set up a Data Load Rule (DLR), you select how you want your source data (such as actuals) loaded to Planful. For example, you might load it in a file format or copy and paste the data. You might also use Boomi Connectors (Web Services). Let’s say you generally use the File load type to load data to Planful from your source system. But, you’ve decided to use connectors via Web Services to load your data. Instead of re-creating the entire DLR (Data Load Rule), you can simply edit it and change the Load Type. This flexibility saves time and ensures consistency in loading data when the same DLR is used.

You can switch between the types of DLRs:

  1. File Load
  2. Web Services
  3. Copy – Paste
  4. NetSuite Connect

Best Practices:

Complete all the steps in the DLR to effectively change the DLR type. Otherwise, the DLR will be rendered unusable.

How to Convert the Load Type from File to Web Services?

  1. Navigate to Maintenance > Data Integration > Data Load Rules.
  2. Select a DLR and double-click it.
  3. Change the Load Type to Web Services and complete the remaining steps of Select Sample Input File, Define Overall Rule Settings, Manipulate Input File, Define Data Mappings and Load Data for the changes to take effect.

    Note:

    To change the DLR back to File Load for Load Type:

    1. Double-click the DLR from the Data Load Rule List page.
    2. On the New Data Load Rule page, change the Load Type to File Load and complete the remaining steps of the DLR.

How to Convert a DLR with a Load Type of File, Copy Paste, or Web Services to NetSuite Connect?

To change the Load Type from File Load, Copy Paste, or Web Services to NetSuite Connect, follow the steps below:

  1. Access the Data Load Rules page by navigating to Maintenance > Data Integration > Data Load Rules.
  2. Double-click a DLR on the Data Load Rule List page. The Define New Data Load Rule page opens.
  3. Select NetSuite Connect as the Load Type.
  4. Complete the remaining steps from the Select Sample Input File to the Load Data step to use the DLR.

To use NetSuite Connect there are required setup steps. Click here to learn all about NetSuite Connect and how to perform the setup.

How to Convert a NetSuite Connect DLR Load Type to File Load, Web Services, or Copy Paste

To change the Load Type from NetSuite Connect to another, follow the steps below:

  1. Access the Data Load Rules page by navigating to Maintenance > Data Integration > Data Load Rules.
  2. Select the NetSuite Connect Load Type DLR and click the Usage Report icon from the top menu bar to find out if the DLR is being used in any Process Flow.
  3. If the DLR is being used in a Process Flow, edit the Process Flow and remove the Task with that DLR. If there is only one Task in the Process Flow, then you will have to delete the Process Flow to continue changing the DLR type.
  4. Return to the Data Load Rules page and double-click a DLR on the Data Load Rule List page. The Define New Data Load Rule page opens.
  5. Change the Load Type as needed.
  6. Complete the remaining steps from the Select Sample Input File to the Load Data step to use the DLR.

High Level Steps to Load Alternate Hierarchy Data Using an XLSX Source File

Note:
Before beginning a data load, the alternate leaf member must already exist in the main hierarchy.

To load data into an alternate hierarchy:

  1. Navigate to Maintenance > Data Integration > Data Load Rules.
  2. Click New Data Load Rule.
  3. Select File Load as the load type from the drop-down.
  4. Select Segment Hierarchies for Load Item and any segment for Load Sub Item.
  5. Click Select Sample Input File.
  6. Click the Choose File button for the Sample Data File under the File Information section and then click Next or Define Overall Rule Settings.
  7. Select Alternate Hierarchy from the Hierarchy Type section. If you set up an elimination company in Hierarchy Management and selected Yes for Elimination, there will be an option to tell the application to obtain a value for the elimination company from the file that contains the data you are loading to Planful.
  8. Select Yes for Include in Data File for the Elim Attribute to obtain the value from the data you are loading to Planful. This Parameter is available for Segment Hierarchy and Alternate Hierarchy data loads.
    Note:
    Elimination companies in the main company hierarchy do not require a new code for the alternate hierarchies.
  9. Define the remaining parameters for the data load (e.g., input file manipulation, data mappings).
  10. Click Load Data and then click Finish. The DLR for the alternate hierarchy is submitted for processing. A notification will be sent once processing is complete and the data is in the alternate hierarchy.
Note:
The following special characters are restricted from use in the Leaf Code of a segment leaf member when creating segment hierarchy data or loading it to Planful: ‘ " ~ ` ! @ # $ ^ * ( ) + = [ ] { } \ | ; : < > , ? All other special characters documented in the Getting Started guide are supported.
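The restriction above can be checked before a load. This is a hypothetical pre-load validation sketch (the character set mirrors the note above; the helper is not part of Planful):

```python
# Characters restricted in a segment Leaf Code, per the note above
# (hypothetical pre-load check, not a Planful API).
RESTRICTED = set("'\"~`!@#$^*()+=[]{}\\|;:<>,?")

def leaf_code_is_valid(code: str) -> bool:
    """Return True if the Leaf Code contains no restricted characters."""
    return not any(ch in RESTRICTED for ch in code)

print(leaf_code_is_valid("ACCT_1000"))   # True
print(leaf_code_is_valid("ACCT[1000]"))  # False
```

Running a check like this against the Leaf Code column of the source file catches rejected characters before the DLR processes the load.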

How to Create a Data Load Rule to Process Workforce Planning Allocations?

Complete the steps below to create a data load rule to process allocations in Workforce Planning.

Prerequisites

  • Choose Allocation Method by Department when setting up employee allocations. For information on how to do so, see the Understanding Employee Allocations topic in the Workforce Planning Admin Guide.
  • Employees should have previously been populated in a Workforce Planning scenario.
  • Create a file that defines the allocation by all segments in the Common Chart of Accounts.
  • Define the file to include all fiscal years based on the budget input years defined in Configuration Tasks.
  • Define the allocation percentages for each month ensuring that the percentages total 100% for each month within a given fiscal year.
  • Once loaded, the allocation can be viewed in the home department of the employee.
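Before loading, you can sanity-check that each employee's allocation percentages total 100% per month. A minimal sketch, assuming rows of (employee, department, month, percent); the column layout is illustrative, not the exact DLR file format:

```python
from collections import defaultdict

def allocation_totals(rows):
    """Sum allocation percentages per (employee, month) pair."""
    totals = defaultdict(float)
    for employee, _department, month, percent in rows:
        totals[(employee, month)] += percent
    return dict(totals)

def all_months_total_100(rows, tolerance=0.01):
    """True only if every (employee, month) total equals 100%."""
    return all(abs(total - 100.0) <= tolerance
               for total in allocation_totals(rows).values())

rows = [
    ("13400", "Sales",     "Jan-2018", 60.0),
    ("13400", "Marketing", "Jan-2018", 40.0),
]
print(all_months_total_100(rows))  # True
```

A row set where a month sums to anything other than 100% would fail this check and should be corrected in the source file before the load.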

To create a Data Load Rule to Process Workforce Planning Allocations, follow the steps below:

  1. Define a data load rule with the following parameters:
    • Load Type: Use either File Load or Copy-Paste
    • Load Item: Choose Workforce Planning
    • Load Sub-Item: Choose Workforce – Employee Allocations
  2. Click Next.
  3. Select Sample Input File and complete the following fields:
    • Row Containing Header – Select 1
    • Number of Rows to Skip – Select 1
    • Number of Source Fields – Select 22; this is the number of columns in the file format for this Data Load Rule
    • Number Format/Thousand Separator – Select Comma(,)
    • Number Format/Decimal Separator – Select Dot(.)
    • Date Format – Select MM/DD/YYYY
    • Data Load – Copy and paste from an Excel spreadsheet the formatted fields to be used in the DLR
  4. Click Next.
  5. Define Overall Rule Settings: Define the scenario and time mapping to be used for this data load rule.
    • Scenario – Select the Scenario the data will be loaded into.
    • Allocation Basis – This is predefined based on the Define Workforce Planning Budgeting Criteria Task; in this case, Allocation by Department
    • Time Mapping – Select Across Columns which defines a column for each month that allocations are processed against.
  6. Click Next.
  7. Manipulate Input File: Confirm the columns align with the Source file.
  8. Click Next.
  9. Define Data Mappings: Select the correct field mappings to the columns identified in the source data file.
  10. Click Next.
  11. Confirm the scenario data is being loaded and click Finish. Use Verify Data to confirm data loads.

You will receive confirmation that data was loaded successfully, or any errors during the data load will be identified. If errors are found, correct them in the source file and attempt the load again.

High Level Steps to Incrementally Load Workforce Data

To incrementally load Employee Workforce Data, complete the following steps:

  1. Navigate to Maintenance > Data Integration > Data Load Rules.
  2. Click New Data Load Rule.
  3. Select a Load Type for your new data load rule.
  4. Select Workforce Planning for the Load Item.
  5. Select Workforce - Employee Data as the Load Sub Item.
  6. Continue with the load process. New employees will be appended and existing employees will be updated.
Note:
There are no limits on the number of employee records that can be loaded at a time.

Loading SSO Users

You cannot load SSO Authentication users using Data Load Rules. If there are a few users, you can manually add them from the user interface. If there are too many users to manually add, load the users with email and password as Native Authentication. Then, submit a Support case and the Operations group will change the user Authentication mode from Native to SSO.

You can load SSO users with Planful APIs. After creating users with Data Load Rules or the APIs, the authentication type must be updated by the Planful Operations team.

Creating New Data Load Rules - Explanation of All Fields with Examples

Data Load Rule List

This application page contains all data load rules that have been created and modified.

The Status indicates whether the latest data load was completed successfully. Red indicates a failure, green means the load was successful, and a gray hyphen indicates that no data loads have taken place.

Sort the list by clicking a column header to display the sort icon. You can sort the list in ascending or descending order.

Delete: Planful checks for data loaded to the Actual scenario. If data is loaded for a locked period, data deletion is not allowed. Unlock the period and then click this button to perform the deletion.

Once a data load rule is created, you can delete the rule without affecting the loaded data (with the exception of the GL Data type load). For example, if you created a rule to load Segment Hierarchies and then delete the rule after you load the segments and hierarchies, the rule will be removed from the application but the segments and hierarchies you loaded will remain. In some cases, it's a good idea to keep data load rules that you create for reuse purposes. For example, you can reuse a rule to mass load new members for a dimension or update properties.

Note:
To load a Text file, save the file as a txt (tab delimited) type and then load.

Clear Data: Delete all data loaded to date for different scenarios using the selected Data Load Rule.

Note:
This feature is applicable for GL Data, Translations Data, Workforce - Employee Data.

For the Workforce Employee Data, you will receive an alert message if you click the Clear Data button and data exists. The alert message indicates that employee data exists and if you proceed, all employee details and associated data will be deleted. Clear Data deletes data from whichever scenario is selected and saved in the Define Overall Rule Settings step of the data load. If employees are mapped to any other scenario, the data is retained in that scenario.

Copy: Copy a selected data load rule. When the Copy Dataload page opens, enter a new name for the copied rule, or allow the system to assign a default name.

Once the copy is complete, the system will inform you, and the DLR List is automatically updated to include the new rule.

The following settings will be copied:

  • General settings (name, description, etc.)
  • Sample input file format information
  • Overall rule settings
  • Metadata (data mappings and manipulations)
Note:
If you are loading actuals across fiscal years and rolling balances forward, the data must be loaded using the same Data Load Rule as the prior year.

New Data Load Rule

Access the New Data Load Rule page by navigating to Maintenance > Data Integration > Data Load Rules. Click the New Data Load Rule tab.

Load Type

Load Types represent the way in which you plan to load data to Planful. Select from the following options:

  • File Load – Load a file that exists outside of the Planful system. For .csv file types, each column must contain data that is separated by a comma. For .txt files, make sure each column contains data that is separated by a tab.
  • Copy-Paste – Copy and paste cells directly from a spreadsheet to the Planful system.
  • Web Services – Load data through an external Data Integration tool (like Boomi). If you create a Web Services DLR and want to change the DLR to a File Load type, you can do so. Make sure you complete all steps on Select Sample Input File, Define Overall Rule Settings, Manipulate Input File, Define Data Mappings, and Load Data for the changes to take effect.
  • Cloud File Upload – Load data through Box. The difference between File Load and Cloud File Upload is that data for the Cloud File Upload is loaded to Planful from a cloud-based service provider (i.e., Box) whereas data for a File Load is loaded from a non-cloud-based source (i.e., a hard drive).
  • Google Drive File Upload – Load data via Google Drive.
  • NetSuite Connect – Load data through a NetSuite integration.
  • FTP/SFTP – Load GL data, translations, and transaction data from the FTP/SFTP server.

Once a Cloud File Data Load Rule is configured, you can include the rule in a Cloud Scheduler task, job, and process flow. When the process flow is executed, the Planful system automatically connects to your Box account, looks for relevant data, and extracts that data to Planful. The data load is then categorized as a Success or Failure in Box. The status (successes and failures) of Data Load Rule cloud file uploads is recorded within the Detail Log accessed on the Job Manager page.

Best Practices - Specific to Box

  • Set the Automatic Cube Refresh at the Define Overall Rule Settings step in the Data Load Rule to No.
  • Set up Cloud Scheduler so that the Scenario process is dependent upon the Data Load Rule.
  • Process the scenario after loading the Box file and have a separate task for each in Cloud Scheduler.


Load Item

Select from the following load items:

  • Segment Hierarchies - Select specific segments to load in the Load Sub Item list box.
    Note:
    The following special characters are restricted from use in the Leaf Code of a segment leaf member when creating segment hierarchy data or loading it to Planful: ‘ " ~ ` ! @ # $ ^ * ( ) + = [ ] { } \ | ; : < > , ? All other special characters documented in the Getting Started guide are supported.
  • Data - Select detailed data (i.e. Sales Data, GL Data, etc.) in the Load Sub Item list box and load all associated information. 
  • Attributes - Load the values associated with the attribute. Once loaded, you can view the values on the Attribute Members List page.
    Note:
    When loading attributes, the leaf name AND code must be included.
  • Attribute Hierarchies - Load values for an attribute in an attribute hierarchy.
  • Entity Hierarchies - Load entity hierarchies, like the Budget Entity Hierarchy for example.
  • User Information - Load multiple users at once with all associated information.
    Note:
    Users can be added by using the User & Role Management option and APIs. To learn more about loading users to Planful, click here.
  • Workforce Data - Load Workforce Data, like employee data for example.
  • Currency Exchange Rates - Create a data load rule for currency exchange rates.
    Note:
    Currency conversion with currency exceptions is calculated when loading data via Data Load Rules. However, there are limitations. When the Currency Type is set to not translated (NT) or the exchange rate is not up-to-date for a currency type that is an exception, the conversion process does not result in the correct calculations. The workaround is to make sure you enter an exchange rate. If NT is required, create a currency type with NT (on the Currency Type page) AND enter zero (0) as the exchange rate (on the Exchange Rates page).

Load Sub Item

The data populated/available options are based on the selected Load Item.

Segment Hierarchies: Available segments are listed. The options available are populated based on system-delivered/defined segments and segments you create.

Data: When loading GL Data, Sales Data, Sales Price, Transaction Data, and Translations Data, it is best to perform a sequential load. For example, if you load data for October 2018 after loading data for November 2018, you will need to load both October and November to calculate the MTD/QTD/YTD information correctly for reporting purposes.

When data is loaded, Planful automatically calculates either MTD or YTD values based on the data load rule setting.

YTD setting: MTD=YTD current period - YTD prior period

MTD setting: YTD=MTD current period + YTD prior period

Account Based: If account type is Flow then YTD=MTD current period + YTD prior period. If account type is Balance then MTD=YTD current period - YTD prior period.
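The two settings above are inverse running-total operations. A quick illustrative sketch (generic Python, not Planful code) of how MTD and YTD values derive from each other:

```python
def mtd_from_ytd(ytd_values):
    """YTD setting: MTD = YTD current period - YTD prior period."""
    mtd, prior = [], 0.0
    for value in ytd_values:
        mtd.append(value - prior)
        prior = value
    return mtd

def ytd_from_mtd(mtd_values):
    """MTD setting: YTD = MTD current period + YTD prior period."""
    ytd, running = [], 0.0
    for value in mtd_values:
        running += value
        ytd.append(running)
    return ytd

print(mtd_from_ytd([100.0, 250.0, 400.0]))  # [100.0, 150.0, 150.0]
print(ytd_from_mtd([100.0, 150.0, 150.0]))  # [100.0, 250.0, 400.0]
```

Because each period's value depends on the prior period, skipping a period breaks the running totals, which is why a sequential load is recommended.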

Transaction Data V2: Transaction Data Version 2 provides faster and more efficient processing of the Data Load Rule. You can select the Transaction Data V2 option from the Load Sub Item drop-down list on the New Data Load Rule page. The Transaction Data Load Rule process is unchanged; only the processing speed has been increased.

Note:
You cannot roll back to Transactions Data version 1 once you start using version 2.



Attributes: Available attributes are listed. The options available are populated based on system-delivered/defined attributes and attributes you create using the Attributes page. 

Important!
When loading attributes, the leaf name AND code must be included.

Attribute Hierarchies: Available attribute hierarchies are listed. This list is populated based on the attribute hierarchies defined on the Attribute Hierarchy page.

Entity Hierarchies: Budget Entity Hierarchies contain entities based on attributes or dimensions. Select this option to load data to defined budget entities and associate the children of the hierarchy (lowest level members) created using the Budget Hierarchy Entity Setup page.

Workforce Planning: You can load employee data directly to a target scenario other than the Default scenario. Select the target scenario on the Overall Rule Settings page.

  • Workforce - Employee Data - Load employee position and attribute information or perform a complete load of employee information including compensations.
  • Workforce - Employee Compensation Item - Create a data load rule to update Salary (full-time or hourly wage type compensation items) and Direct Data Input compensation items in the Default Scenario. This functionality allows you to incrementally update Workforce data versus reloading all Workforce data.
Note:
Compensation Item Loads require that you load all compensation items for an employee.

The following table reflects the columns that are required for this type of data load and some example row data:

| Employee # | Plan Pos./Pos. Desc. | Compensation Code | Rate | Hour | Default Value | Description |
| --- | --- | --- | --- | --- | --- | --- |
| 13400 | Default Pos | FT | 0 | 0 | 70000 | Full-time / Annualized employee |
| 13400 | Default Pos | Commissions | 0 | 0 | 100 | Direct Data Input compensation item |
| 13400 | Default Pos | PT | 40 | 2000 | 0 | Hourly employee |

  • Workforce - Employee Allocations - Load employee allocations (allocation percentages) for all periods of each fiscal year (across cost centers). For example, if the start year is 2018 and the scenario covers a two-year time period, the load should contain allocations for 2018 and 2019. Also, each allocation needs to total 100%. For example, if an employee's expenses relate to two different departments, the compensation must be allocated 100% across both departments.

Currency Exchange Rates

  • Load Exchange Rates - Load exchange rate details (conversion calculations, currency types, etc.). Exchange rates are generally calculated based on the exchange rates entered on the Exchange Rate page. Exceptions are not taken into consideration when exchange rates are loaded.
  • Exchange Rates with Translations - This option allows you to load exchange rate data and translate the data from a local currency to a common currency. 

Translations provide a mechanism for organizations to map the data from the local chart of accounts of the reporting site (subsidiary or a partner company) with the common reporting chart of accounts used in the Planful suite.

Select Sample Input File

Select file requirements or the structure of the data you are loading. If you are performing a Copy Paste or Web Services load, this page allows you to specify parameters for incoming data. Different fields are available based on the Load Type selected on the New Data Load Rule page.

Note:
To load an Excel file, save the file as a txt (tab delimited) type and then load.

Let's say you loaded data using Data Load Rules with a File load type. Then, you want to add extra data columns, which you want reflected in the load. To make this modification and load data, change the number of columns and browse to the sample input file to change the structure. Lastly, map the new columns on the Define Data Mappings page before you perform the load.

If you are having any difficulties loading the sample data file, make sure the source file does not contain formulas linked to other files on your computer. Files with formula references are not supported.

The format of your data spreadsheet (i.e. Excel) must match the Select Sample Input file specifications.

File Requirements - These fields are available if you selected a File Load Type

  • File Type – Select the type of file you want to load; Excel (xls), text (txt), or csv (comma-separated values).
  • Column Delimiter - Available if you select to load from a txt or csv file. A column delimiter separates each column with a tab, semicolon, or comma. xls files already contain column separators, and therefore this option is not available.
  • Row Identifier - Specify the character that separates rows in the Data Load file. The row delimiter is important because the system needs to know when a row begins and ends. The identifier marks the beginning/ending of rows in your txt and csv files.
  • Text Qualifier - Specify the character you want the system to use to enclose values in the file. This function is important because it separates the field data from the delimiter. You may select to separate the two using single quotes or double quotes. If you choose none, there will be no separator to distinguish between the delimiter and field data.
Note:
The File Requirements / Information fields are opened for file loads only.
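The roles of the column delimiter and text qualifier can be seen with Python's standard csv module. This is a generic illustration with made-up sample data, not Planful-specific code:

```python
import csv
import io

# A semicolon-delimited file where double quotes act as the text qualifier,
# so the semicolon inside the quoted field is data, not a column break.
raw = 'Account;Amount\n"1000;Cash";1250.50\n'

reader = csv.reader(io.StringIO(raw), delimiter=";", quotechar='"')
rows = list(reader)
print(rows)  # [['Account', 'Amount'], ['1000;Cash', '1250.50']]
```

Without the qualifier, the first data row would be split into three fields instead of two, which is exactly the ambiguity the Text Qualifier setting exists to resolve.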

Header

  • Row Containing Header - Enter the number of the row that contains the header information. A header contains the heading names for a column.
  • Number of Rows to Skip - Enter the number of rows in your spreadsheet to skip. For example, if your spreadsheet has 20 rows, but the first 3 rows contain data that is not important, you can enter 3 in this field. The system will skip the first 3 rows during the data load process.
  • Number of Source Fields - Enter a number based on the number of columns you plan to select to copy from your spreadsheet, txt file, or csv file. For example, if your source file contains 10 columns, enter 10 here. For Copy Paste loads, the graph automatically reflects the number you enter.

Number Format

Different countries use different ways to represent numbers. For example, 6 euros and 28 cents may need to be loaded as 6,28 whereas 6 dollars and 28 cents loads as 6.28. The Thousand and Decimal Separators allow you to specify the format in which you want to load your numerical data.

  • Thousand Separator - Select a period or comma to separate thousands. In general, non-USD based companies will set the thousand separator to period and the decimal separator to comma.
  • Decimal Separator - Select a period or comma to separate decimals. In general, USD based companies will set the thousand separator to comma and the decimal separator to period. This is the default setting.
Note:
Thousand Separator and Decimal Separator must be different. For example, you cannot select a comma for the Thousand Separator and a comma for the Decimal Separator.
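As an illustration of why the two separators must differ, here is a small normalization sketch (a hypothetical helper, not part of Planful):

```python
def parse_number(text, thousand_sep=",", decimal_sep="."):
    """Normalize a locale-formatted number string to a float."""
    if thousand_sep == decimal_sep:
        raise ValueError("Thousand and decimal separators must be different")
    # Drop grouping separators, then convert the decimal mark to a dot.
    return float(text.replace(thousand_sep, "").replace(decimal_sep, "."))

print(parse_number("1,234.56"))                                     # 1234.56 (US style)
print(parse_number("1.234,56", thousand_sep=".", decimal_sep=","))  # 1234.56 (European style)
```

If both separators were the same character, there would be no way to tell grouping from the decimal mark, which is why the settings reject that combination.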

Date Format

Select the format for the date to appear in the resulting load. Different countries use different ways of representing a day within a calendar year. For example, in Hungary the date is represented as Year/Month/Day. In the U.S., several organizations represent a date by Month/Day/Year. Essentially, there are several ways in which you may choose to represent a date to best coincide with your organization's current practices.

File Information - Available for File Load Types

Sample Data File - Click Browse to search for the sample data file.

Data Load

If you select Copy - Paste as the Load Type, a Data Load grid will appear. Copy data from your file and paste it into the provided grid. Paste data into the grid by selecting the whole grid (click the cell in the upper left-hand corner).

Note:
You can copy and paste up to 10,000 rows (which is the default). The system automatically deletes empty rows, which will not be displayed on the Manipulate Input File page.

Define Overall Rule Settings

Overview

Select parameters, defaults, and segment information to further define the way you want to load data. Different fields are available based on the Load Type. Documentation is grouped based on load types.

Description of Fields for Load Item - Segment Hierarchies 

Hierarchy Type: Select Main Hierarchy unless you are loading data to an Alternate Hierarchy.

Member

  • Allow Move: Select to allow parent changes for existing members.
  • Allow Property Change: Select to allow Property Changes (i.e. name, currency, or other system properties) for existing members.
Note:
If you do not select the Allow Property Change check box, two parent entries will be created with the same code and two names (one of which will be empty). The default behavior for nodes is: Parent nodes - uniqueness is maintained at the Code-Name level. Leaf nodes - uniqueness is maintained at the Code level.
Note:
The Allow Property Change check box is enabled for the NetSuite Connect DLR. This functionality is applicable only for the Parent Child Segment Hierarchies load.

Build

  • Level: Build a hierarchical level.
    • Select Number of Levels: Available if you select the Level option. Select the number of levels you want your hierarchy to consist of.
  • Parent Child: Build a parent child based hierarchy.

Parent Child Versus Level Based Hierarchies

A Parent Child hierarchy is one where you have a child field and a parent field. For example, let's say you have a table with employee department details and associated subordinates.

Child | Parent
Matt | ?
Brian | Matt
Jim | Matt
Katy | Brian

Matt is the ultimate boss of the department. Matt has two subordinates: Brian and Jim. Brian's subordinate is Katy.
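A parent-child load like the table above can be represented as (child, parent) pairs. A small sketch of walking such a hierarchy (illustrative, not Planful code):

```python
from collections import defaultdict

# The Child/Parent table above, with None marking the root (shown as '?').
pairs = [("Matt", None), ("Brian", "Matt"), ("Jim", "Matt"), ("Katy", "Brian")]

children = defaultdict(list)
for child, parent in pairs:
    if parent is not None:
        children[parent].append(child)

def descendants(member):
    """Depth-first list of all members rolling up to the given member."""
    out = []
    for c in children[member]:
        out.append(c)
        out.extend(descendants(c))
    return out

print(descendants("Matt"))  # ['Brian', 'Katy', 'Jim']
```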

A Level Based hierarchy has the following structure.

Level 1 | Level 2 | Level 3
Matt | Brian | Katy

Ragged hierarchies (hierarchies with varying levels of tree depth) are supported while loading Level based segment hierarchies using a Data Load Rule.

Note:
The intermediate Parent Code and Parent Name in a hierarchy cannot be empty.
Note:
Level Based Hierarchy load does not support the loading of roll up level without any leaf level members.
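Converting a parent-child hierarchy to level-based rows makes the ragged-depth point concrete. An illustrative sketch:

```python
def to_level_rows(children, root):
    """Expand a parent-child tree into level-based rows (root-to-leaf paths).

    Ragged depth is allowed: shorter paths simply have fewer levels.
    """
    rows = []

    def walk(node, path):
        path = path + [node]
        kids = children.get(node, [])
        if not kids:
            rows.append(path)  # leaf reached: emit one level-based row
        else:
            for k in kids:
                walk(k, path)

    walk(root, [])
    return rows

children = {"Matt": ["Brian", "Jim"], "Brian": ["Katy"]}
print(to_level_rows(children, "Matt"))
# [['Matt', 'Brian', 'Katy'], ['Matt', 'Jim']]  <- second row is one level shorter
```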

Parameter

The parameters listed are those available to be included in the data file. If an elimination company was set up in Hierarchy Management, you may have a default Elim Attribute parameter. Select Yes for Include in Data File for the Elim Attribute to obtain the value from the data you are loading to Planful. This parameter is available for Segment Hierarchy and Alternate Hierarchy data loads.

Include in Clear Data

Select segments to clear or retain data for. For example, if you load data for Company A and then want to use the same data load rule to load data for Company B, the data loaded for Company A will not be deleted if you do not choose to clear the data.

Include in Data File

Select Yes if a dimension is not defined within the data file and there is a common member for all records.

Select No if you do not want the listed parameters in the data file.

Select

Available if you select No for a parameter. Search for other dimensions to associate with the data file.

Description of Fields for Load Item - Data

Load Sub Item - GL Data

Parameters

  • Reporting - Select Common Currency (CC) if you are loading legal entity data in a common currency. Otherwise, select Local Currency (LC).
  • Credit/Debit – Select No if account balances are in their natural trial balance format. Otherwise, select Yes. When Credit/Debit is set to Yes, the calculation for Debit and Credit accounts works as follows:
    • For Debit Accounts - Amounts loaded for a debit account are displayed "as is" in reports. For example, if you want to see a negative number for a debit account, load a negative number for that account.
    • For Credit Accounts - The number being loaded is negated (multiplied by -1) and stored in the system. So, if an amount of 100 is loaded, it is stored internally as -100, and the Verify Data Report shows -100.
  • Data Type – Select YTD format for balance sheet and income statement data loads. If you select MTD, there must be an opening balance for data. Select Account Based to format the balance sheet based on an account.
  • Calculate Trial Balance – Select Yes to calculate the trial balance. Credits and Debits must be balanced in order for the data to load. Remember that a Trial Balance is a list of all general ledger accounts contained in an organization's ledger. When the trial balance option is available, the data load file is rejected if the validation check for the trial balance fails.
  • Currency Conversion – Select Yes to translate the data from local currency to common currency. Select No to indicate that a translation does not need to be processed.
  • Scenario Type - Select a Scenario Type to display only the associated scenarios in the Scenario field drop-down. 
  • Scenario – Select the default scenario to load data. If you select a Budget Scenario, Forecast Scenario or a Preload Scenario for a GL or Translation and select Month as the Frequency, the Across Columns by Scenario Time Mapping value is available.
  • Frequency – Select Monthly for all Financials. For Scorecards you may select Day, Quarter, or Year.
  • Time Mapping – Select to indicate if time periods are defined across the columns or whether one period column is defined with the values stored in the rows. 
    • If you select Across Columns by Scenario, the Segments (subject to default settings), Fiscal Months and Opening Balances are available in data mappings. Fiscal Year is not available. In the DLR default settings, Fiscal Year and Fiscal Month are not available. 
    • Time Mapping Across Columns by Scenario is available for Sales Price and Sales Data if a budget scenario is selected while loading Sales Price/Sales Data. Fiscal Year and Fiscal Month fields will be disabled in Default Settings. Fiscal Year will be disabled in Data Mappings.
  • Automatic Cube Refresh - Select to automatically have the system refresh the reporting cube. The cube must be processed after the load is complete to view translated data in reports.
  • Process Employees - Select this checkbox to process employees using the DLR. If this check box is selected and data is loaded successfully using the DLR, a request to process all the employees will be submitted. The default value is No. For existing DLRs, this check box will not be selected.
Note:
The request to process all the employees will be submitted only if the Enable Delta Processing is enabled in the Define Workforce Planning Budgeting Criteria page of Configuration Tasks.

The check box is available for the following DLRs:

  • Workforce - Employee Data
  • Workforce - Employee Compensation Item
  • Workforce - Employee Allocations
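The Credit/Debit and Calculate Trial Balance behaviors described above can be sketched as follows (an illustrative Python sketch of the documented rules, not Planful's implementation):

```python
def store_amount(amount: float, account_type: str, credit_debit: str) -> float:
    """Sign handling on load.

    With Credit/Debit = Yes, credit amounts are negated (multiplied by -1)
    before storage; debit amounts are stored as-is. With No, balances are
    assumed to already be in natural trial balance format.
    """
    if credit_debit == "Yes" and account_type == "credit":
        return amount * -1
    return amount

def trial_balance_ok(rows) -> bool:
    """Calculate Trial Balance = Yes: the file is rejected unless
    total debits equal total credits."""
    debits = sum(a for a, t in rows if t == "debit")
    credits = sum(a for a, t in rows if t == "credit")
    return debits == credits

print(store_amount(100, "credit", "Yes"))                   # -100
print(trial_balance_ok([(100, "debit"), (100, "credit")]))  # True
```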

Default Value for Missing Periods

Define data calculations for missing periods at DLR level with GL Data and Translations sub type.

  • Enable new Data Calculations - Select this checkbox to enable data calculations for missing periods (selected by default).
  • Overwrite Global Settings - Select this checkbox to overwrite values for flow accounts and balance accounts.
  • Flow Accounts - Select YTD or MTD flow account to overwrite.
  • Balance Accounts - Select YTD or MTD balance accounts to overwrite.

Email Recipients

By default, the email recipients added in the Cloud Services Configuration Task are carried forward to the Cloud Services Data Load Rule, but you can edit the recipients for each Cloud Services Data Load Rule you create.

Clear Data Configuration - Include in Clear Data for GL and Translations Data

For Clear Data Combination, Scenarios are never in the file. They are defined by the DLR settings on the same page. So, the file has nothing to do with which Scenario is updated/cleared. If you don’t select any checkboxes, the system will clear all data for the Time periods and Scenarios found in the load file. If you select a box, for example Company, the system will clear all data for the Time periods and Scenario but only for the companies found in the load file. The more boxes you select, the more granular/precise the clear based upon the file being loaded.

Clear Data Combination functionality can be used in two ways:

  1. Uncheck all dimensions except the dimension that drives your currency (a legal entity, like Company) to clear all data loaded to date for the legal entities and periods in the file and replace it with the new file.
  2. Check all dimensions - to update only the GL combinations present in the file and not clear any other combinations previously loaded using this rule.

If you are unsure how to use this functionality, start with all of the boxes unchecked. This ensures that every subsequent file load will clear all data previously loaded to that scenario for the fiscal period(s) in the current load file.

The most common situation where you might need to check a box is when you are loading multiple files using the same DLR to the same fiscal period. Here are two examples:

  • If you are loading multiple Companies using separate files, you would check the Company box. This would ensure that ONLY the data for the company in the next load file would be cleared. Other companies loaded prior to this file would remain unchanged. 
  • Sometimes something similar happens with the Account dimension. For example, you might have 1 file that contains trial balance information and another file that contains data for statistical accounts. The files might have the same layout and use the same DLR. In this case, you would check the Account dimension only so that loading the statistical amounts after the trial balance amounts will NOT clear the trial balance amounts.

Clear data combination allows you to clear existing data prior to loading new data based on selected segment combinations.

  • Segment - Select segments to update the records in the database for the named data load rule. The Legal Entity dimension that affects the currency in the application will always be grayed out.
  • Include in Clear Data - Select this checkbox to clear all records and reload what is in the file for all periods included in the data load rule.

When All Segments are checked, the system will overwrite only the associated ProfitCenter_CostCenter and Account combinations in the file.

When Account is not checked, the system will overwrite data loaded previously using the Data Load Rule (DLR) at the ProfitCenter_CostCenter level. For example, if you are loading data to the file for the ProfitCenter_CostCenter member “0001” only, all account data previously loaded using the DLR for “0001” will be deleted for the period and reloaded based on the file.
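A rough sketch of the clear scope described above, using hypothetical row dictionaries keyed by dimension name (time period, and scenario per the DLR settings, always scope the clear; each checked dimension narrows it to the members found in the load file):

```python
def rows_to_clear(existing, load_file, checked_dims):
    """Return the existing rows a load would clear under Clear Data Combination.

    Illustrative sketch: the clear key always includes the period; each
    checked dimension adds its members from the file to the key.
    """
    keyfields = ("period",) + tuple(checked_dims)
    keys = {tuple(r[d] for d in keyfields) for r in load_file}
    return [r for r in existing if tuple(r[d] for d in keyfields) in keys]

existing = [
    {"company": "2010", "account": "1610", "period": "Jan-22", "amt": 161},
    {"company": "2100", "account": "3320", "period": "Jan-22", "amt": 444},
]
load = [{"company": "2100", "account": "3320", "period": "Jan-22", "amt": 555}]

# Nothing checked: everything for the loaded period is cleared.
print(len(rows_to_clear(existing, load, ())))            # 2
# Company checked: only company 2100's data is cleared.
print(len(rows_to_clear(existing, load, ("company",))))  # 1
```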

1. Loading Data with all Segments Checked in Clear Data Combination

If you enable Clear all data prior to new data load when the transactional data is loaded via Data Load Rules, the system will clear all the data for Actuals and the distinct time periods available in the new data load file. Below is a sample of the original data that had been previously loaded with a data load rule for 2022:

Original Data

Company | Department | Account | ProductLine | CostCenter | BusinessCenter | Geography | ICSegment | Fiscal Year | Jan-22 | Feb-22 | Mar-22 | Apr-22
2010 | Default | 1610 | Default | Default | Default | Default | Default | 2022 | 161 | 161 | 161 | 161
2010 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 332 | 332 | 332 | 332
2100 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 444 | 444 | 444 | 444

New data is then loaded using a data load rule, with all segments checked in the Clear Data Combination option:

Data Loaded

Company | Department | Account | ProductLine | CostCenter | BusinessCenter | Geography | ICSegment | Fiscal Year | Jan-22 | Feb-22 | Mar-22 | Apr-22
2100 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 555 | 555 | 555 | 555
2120 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 212 | 212 | 212 | 212
2140 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 214 | 214 | 214 | 214

The resulting data contains the previously loaded data along with the newly loaded data, with any updates:

Resulting Data, with all Segments Checked in Clear Data Combination

Account | Company | Department | CostCenter | BusinessCenter | ProductLine | Geography | ICSegment | Jan-22 | Feb-22 | Mar-22 | Apr-22
1610 | 2010 | Default | Default | Default | Default | Default | Default | 161 | 161 | 161 | 161
3320 | 2010 | Default | Default | Default | Default | Default | Default | 332 | 332 | 332 | 332
3320 | 2100 | Default | Default | Default | Default | Default | Default | 444 | 444 | 444 | 444
3320 | 2120 | Default | Default | Default | Default | Default | Default | 212 | 212 | 212 | 212
3320 | 2140 | Default | Default | Default | Default | Default | Default | 214 | 214 | 214 | 214

Conversely, when all Segments in Clear Data Combination are unchecked, data that was previously loaded by this data load rule for the time periods included in the load file is cleared, and the new data is loaded.

2. Loading Data with all Segments Unchecked in Clear Data Combination

If you disable Clear all data prior to new data load, the system will clear pre-existing transactional data with the following conditions:

  • Scenario will be Actuals.
  • The distinct time periods available in the new data load file.
  • The segment combinations available in the new data load file.

Below is a sample of original data that had been previously loaded with a data load rule for 2022:

Original Data

Company | Department | Account | ProductLine | CostCenter | BusinessCenter | Geography | ICSegment | Fiscal Year | Jan-22 | Feb-22 | Mar-22 | Apr-22
2010 | Default | 1610 | Default | Default | Default | Default | Default | 2022 | 161 | 161 | 161 | 161
2010 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 332 | 332 | 332 | 332
2100 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 444 | 444 | 444 | 444

New data is then loaded using a data load rule, having all segments unchecked in the Clear Data Combination option:

Data Loaded

Company | Department | Account | ProductLine | CostCenter | BusinessCenter | Geography | ICSegment | Fiscal Year | Jan-22 | Feb-22 | Mar-22 | Apr-22
2100 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 555 | 555 | 555 | 555
2120 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 212 | 212 | 212 | 212
2140 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 214 | 214 | 214 | 214

The resulting data clears data that was previously loaded by this data rule and for the time period included in the data load rule, and loads the new data:

Resulting Data, with all Segments Unchecked in Clear Data Combination

Account | Company | Department | CostCenter | BusinessCenter | ProductLine | Geography | ICSegment | Jan-22 | Feb-22 | Mar-22 | Apr-22
3320 | 2100 | Default | Default | Default | Default | Default | Default | 555 | 555 | 555 | 555
3320 | 2120 | Default | Default | Default | Default | Default | Default | 212 | 212 | 212 | 212
3320 | 2140 | Default | Default | Default | Default | Default | Default | 214 | 214 | 214 | 214
Note:
The Default Clear Data Combination is to have all segments checked. In this case, no existing data is cleared by the load rule. New data is loaded from the data load rule and the existing data remains. For data that is loaded to existing intersections, the new data updates the existing data.

3. Loading Data with Only One Segment Checked in Clear Data Combination

Below is a sample of original data that had been previously loaded with a data load rule for 2022:

Original Data

Company | Department | Account | ProductLine | CostCenter | BusinessCenter | Geography | ICSegment | Fiscal Year | Jan-22 | Feb-22 | Mar-22 | Apr-22
2010 | Default | 1610 | Default | Default | Default | Default | Default | 2022 | 161 | 161 | 161 | 161
2010 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 332 | 332 | 332 | 332
2100 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 444 | 444 | 444 | 444

New data is then loaded using a data load rule, having only one segment (Company) checked in the Clear Data Combination option:

Data Loaded 

Company | Department | Account | ProductLine | CostCenter | BusinessCenter | Geography | ICSegment | Fiscal Year | Jan-22 | Feb-22 | Mar-22 | Apr-22
2100 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 555 | 555 | 555 | 555
2120 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 212 | 212 | 212 | 212
2140 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 214 | 214 | 214 | 214

The resulting data contains the previously loaded data along with the newly loaded data, with any updates:

Resulting Data, with only one Segment (Company) Checked in Clear Data Combination

Account | Company | Department | CostCenter | BusinessCenter | ProductLine | Geography | ICSegment | Jan-22 | Feb-22 | Mar-22 | Apr-22
1610 | 2010 | Default | Default | Default | Default | Default | Default | 161 | 161 | 161 | 161
3320 | 2010 | Default | Default | Default | Default | Default | Default | 332 | 332 | 332 | 332
3320 | 2100 | Default | Default | Default | Default | Default | Default | 555 | 555 | 555 | 555
3320 | 2120 | Default | Default | Default | Default | Default | Default | 212 | 212 | 212 | 212
3320 | 2140 | Default | Default | Default | Default | Default | Default | 214 | 214 | 214 | 214

4. Loading Data with more than One Segment Checked in Clear Data Combination

Below is a sample of original data that had been previously loaded with a data load rule for 2022:

Original Data

Company | Department | Account | ProductLine | CostCenter | BusinessCenter | Geography | ICSegment | Fiscal Year | Jan-22 | Feb-22 | Mar-22 | Apr-22
2010 | Default | 1610 | Default | Default | Default | Default | Default | 2022 | 161 | 161 | 161 | 161
2010 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 332 | 332 | 332 | 332
2100 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 444 | 444 | 444 | 444

New data is then loaded using a data load rule, having two segments (Company & Accounts) checked in the Clear Data Combination option:

Data Loaded 

Company | Department | Account | ProductLine | CostCenter | BusinessCenter | Geography | ICSegment | Fiscal Year | Jan-22 | Feb-22 | Mar-22 | Apr-22
2100 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 555 | 555 | 555 | 555
2120 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 212 | 212 | 212 | 212
2140 | Default | 3320 | Default | Default | Default | Default | Default | 2022 | 214 | 214 | 214 | 214

The resulting data contains the previously loaded data along with the newly loaded data, with any updates:

Resulting Data, with two Segments (Company & Account) Checked in Clear Data Combination

Account | Company | Department | CostCenter | BusinessCenter | ProductLine | Geography | ICSegment | Jan-22 | Feb-22 | Mar-22 | Apr-22
1610 | 2010 | Default | Default | Default | Default | Default | Default | 161 | 161 | 161 | 161
3320 | 2010 | Default | Default | Default | Default | Default | Default | 332 | 332 | 332 | 332
3320 | 2100 | Default | Default | Default | Default | Default | Default | 555 | 555 | 555 | 555
3320 | 2120 | Default | Default | Default | Default | Default | Default | 212 | 212 | 212 | 212
3320 | 2140 | Default | Default | Default | Default | Default | Default | 214 | 214 | 214 | 214

Load Sub Item - Transaction Data

Parameters

The parameters listed are those available to be included in the data file.

Clear Data Combination - Clear all data prior to new data load for Transaction Data

If the 'Clear all data prior to new data load' box is unchecked, the Transaction Data Load will clear transactions only for the exact GL combination strings contained in the load file. 

The system does not allow duplicate transaction data for a GL combination. If you want to update a specific transaction, uncheck the box and upload all transaction data for that specific GL combination.


Transaction Details

Transaction details can be setup on the Transaction Details Setup page.

Attribute type: The Common COA (Chart of Accounts) segment defaults are displayed based on the setup on the Define Financial Segments page.

Attribute name: The name of the attribute based on the segments defined for the Common COA and attributes setup for those segments. For example, some Common COA names are; Company, Account, and Project.

Attribute data type

  • varchar: Associates a data type with a variable character field. Variable character fields have a user-defined length and can hold letters and numbers.
  • URL: Load a reference URL. You can click the URL hyperlink when you view the Transactions Drill-Through reports.

Attribute length: The maximum number of characters for the field.

Include in Data File

  • Select Yes if a dimension is not defined within the data file and there is a common member for all records.
  • Select No if you do not want the listed parameters in the data file.

Default/Select: Available if you select No for a parameter. Search for other dimensions to associate with the data file.

1. Loading Data with the Clear all data prior to new data load Checkbox Checked

Below is a sample of original data that had been previously loaded with a data load rule:

Original Data

AccountCompanyDepartmentCostCenterBusinessCenterProductLineGeographyICSegmentDate/TimeDebit AmountProduct CodeNet gmsURL
1943-Balance-YTD_v32091LC5351009210091PROD 011100-9011103315-01-202215001P007126150http://www.yahoo.com
1943-Balance-YTD_v32091LC5351009210091PROD 011100-9011103316-01-202216001P007127160www.gmail.com
1943-Balance-YTD_v32091LC5351009210091PROD 011100-9011103317-01-202216301P007128170http://www.google.com
1943-Balance-YTD_v32091LC5351009210091PROD 011100-9011103318-01-202216401P007129180http://www.yahoo4.com
1943-Balance-YTD_v32091LC531009210091PROD 011100-9011103319-01-202216402P007130190http://www.yahoo4.com

New data is then loaded using a data load rule, with the Clear all data prior to new data load checkbox checked:

Data Loaded 

AccountCompanyDepartmentCostCenterBusinessCenterProductLineGeographyICSegmentDate/TimeDebit AmountProduct CodeNet gmsURL
4018-Flow-MTD_v32091LC5351009210091PROD 011100-9011103315-01-202225001P007126150http://www.yahoo.com
4018-Flow-MTD_v32091LC5351009210091PROD 011100-9011103316-01-202225002P007127160www.gmail.com
4018-Flow-MTD_v32091LC5351009210091PROD 011100-9011103317-01-202225003P007128170http://www.google.com
4018-Flow-MTD_v32091LC5351009210091PROD 011100-9011103318-01-202225004P007129180http://www.yahoo4.com
4018-Flow-MTD_v32091LC531009210091PROD 011100-9011103319-01-202225005P007130190http://www.yahoo4.com

The resulting data clears the data previously loaded by this rule (Account 1943-Balance-YTD_v3) and replaces it with the five newly loaded lines (Account 4018-Flow-MTD_v3).

Resulting Data

AccountCompanyDepartmentCostCenterBusinessCenterProductLineGeographyICSegmentDate/TimeDebit AmountProduct CodeNet gmsURL
4018-Flow-MTD_v32091LC5351009210091PROD 011100-9011103315-01-202225001P007126150http://www.yahoo.com
4018-Flow-MTD_v32091LC5351009210091PROD 011100-9011103316-01-202225002P007127160www.gmail.com
4018-Flow-MTD_v32091LC5351009210091PROD 011100-9011103317-01-202225003P007128170http://www.google.com
4018-Flow-MTD_v32091LC5351009210091PROD 011100-9011103318-01-202225004P007129180http://www.yahoo4.com
4018-Flow-MTD_v32091LC531009210091PROD 011100-9011103319-01-202225005P007130190http://www.yahoo4.com

2. Loading Data with the Clear all data prior to new data load Checkbox Unchecked

Below is a sample of original data that had been previously loaded with a data load rule:

Original Data

AccountCompanyDepartmentCostCenterBusinessCenterProductLineGeographyICSegmentDate/TimeDebit AmountProduct CodeNet gmsURL
1943-Balance-YTD_v32091LC5351009210091PROD 011100-9011103315-01-202215001P007126150http://www.yahoo.com
1943-Balance-YTD_v32091LC5351009210091PROD 011100-9011103316-01-202216001P007127160www.gmail.com
1943-Balance-YTD_v32091LC5351009210091PROD 011100-9011103317-01-202216301P007128170http://www.google.com
1943-Balance-YTD_v32091LC5351009210091PROD 011100-9011103318-01-202216401P007129180http://www.yahoo4.com

New data is then loaded using a data load rule, with the Clear all data prior to new data load checkbox unchecked.

Data Loaded 

AccountCompanyDepartmentCostCenterBusinessCenterProductLineGeographyICSegmentDate/TimeDebit AmountProduct CodeNet gmsURL
4018-Flow-MTD_v32091LC5351009210091PROD 011100-9011103315-01-202225001P007126150http://www.yahoo.com
4018-Flow-MTD_v32091LC5351009210091PROD 011100-9011103316-01-202225002P007127160www.gmail.com
4018-Flow-MTD_v32091LC5351009210091PROD 011100-9011103317-01-202225003P007128170http://www.google.com

The resulting data adds the three newly loaded lines for Account '4018-Flow-MTD_v3' to the existing data.

AccountCompanyDepartmentCostCenterBusinessCenterProductLineGeographyICSegmentDate/TimeDebit AmountProduct CodeNet gmsURL
1943-Balance-YTD_v32091LC5351009210091PROD 011100-9011103315-01-202215001P007126150http://www.yahoo.com
1943-Balance-YTD_v32091LC5351009210091PROD 011100-9011103316-01-202215001P007127160www.gmail.com
1943-Balance-YTD_v32091LC5351009210091PROD 011100-9011103317-01-202215001P007128170http://www.google.com
1943-Balance-YTD_v32091LC5351009210091PROD 011100-9011103318-01-202215001P007129180http://www.yahoo4.com
4018-Flow-MTD_v32091LC5351009210091PROD 011100-9011103315-01-202225001P007126150http://www.yahoo.com
4018-Flow-MTD_v32091LC5351009210091PROD 011100-9011103316-01-202225002P007127160www.gmail.com
4018-Flow-MTD_v32091LC5351009210091PROD 011100-9011103317-01-202225003P007128170http://www.yahoo4.com

Description of Fields for Load Item - Attributes and Attribute Hierarchies 

Important! When loading attributes, the leaf name AND code must be included.

Member

  • Allow Move - Select this checkbox while loading segment hierarchies to allow Parent Changes for existing members.
  • Allow Property Change - Select this checkbox while loading segment hierarchies to allow Property Changes (i.e. name, currency, or other system properties) for existing members.
Note:
If you do not select the Allow Property Change checkbox, two parent entries will be created with the same code and two names (one of which will be empty). The default behavior for nodes is: Parent nodes - uniqueness is maintained at the Code-Name level. Leaf nodes - uniqueness is maintained at the Code level.

Build

  • Level- Select to build a hierarchical level.
    • Max Number of Levels - This option is available if you select the Level option. Select the number of levels you want your hierarchy to consist of.
  • Parent Child - Select to build a parent child based hierarchy.

Description of Fields for Load Item - Entity Hierarchies

Member

  • Allow Move - Select this checkbox while loading segment hierarchies to allow Parent Changes for existing members.
  • Allow Property Change - Select this checkbox while loading segment hierarchies to allow Property Changes (i.e. name, currency, or other system properties) for existing members.
Note:
If you do not select the Allow Property Change checkbox, two parent entries will be created with the same code and two names (one of which will be empty). The default behavior for nodes is: Parent nodes - uniqueness is maintained at the Code-Name level. Leaf nodes - uniqueness is maintained at the Code level.

Build

  • Level- Select to build a hierarchical level.
    • Max Number of Levels - This option is available if you select the Level option. Select the number of levels you want your hierarchy to consist of.
  • Parent Child - Select to build a parent child based hierarchy.

Parameters

  • Entity Type: Entity types may be based on attributes or segments.

Defaults

  • Parameter: The parameters listed are those available to be included in the data file.
  • Include in Data File: 
    • Select Yes if a dimension is not defined within the data file and there is a common member for all records.
    • Select No if you do not want the listed parameters in the data file.
  • Select: Available if you select No for a parameter. Search for other dimensions to associate with the data file.

Description of Fields for Load Item - User Load

Defaults

  • Parameter: The parameters listed are those available to be included in the data file.
  • Include in Data File:
    • Select Yes to include role in data file.
    • Select No if you do not want the role in the data file.
  • Select: Available if you select No for a parameter. Search for other dimensions to associate with the data file.

Description of Fields for Load Item - Workforce Planning

Load Columns

Click to open the Load Columns dialog page to select additional columns you want to load.

  • Click a row.
  • Click the Save button.
  • The column will be added to the Data Load Column list.
Note:
Hold down the control key on your keyboard and select multiple columns at once.

In addition to the current default data load columns populated, the following required columns are displayed: 

  • Budget Entity Code
  • Budget Entity Segments (COA segments form the Budget Entity)
  • Rate per Hour
  • Work Hours - Basis (Annual / By Month)
  • Salary - Basis (Annual / By Month - the wage Type compensation item)

For Review percentages, the ‘Multiple Reviews’ column is available in the Load columns selector. If selected, the columns related to the effective dates and percentages become available.

Scenario selection in Workforce – Employee Data Load Rule (DLR)

You can retain the Data Load Columns when you change the Scenario in the Define Overall Rule Settings tab for a Workforce – Employee Data DLR.

For example, when you select Scenario 2 after selecting Scenario 1 from the Scenario drop-down list, the Scenario Change pop-up window displays a confirmation message asking whether to retain the columns. If you click:

  • Yes: The columns from Scenario 1 will be retained for Scenario 2.
  • No: The columns from Scenario 1 will not be retained for Scenario 2. If there are any columns saved against the DLR for Scenario 2 then those columns will be displayed along with the default columns.

To save the column settings for a scenario, click Next.

Note:
When you load columns in Scenario 1 and then select Scenario 2, a confirmation message is displayed to retain the columns. If you click Yes, the columns from Scenario 1 are retained for Scenario 2 even if you do not save the columns for Scenario 1.

The image below displays the Scenario Change pop-up window when you select Budget 2017 scenario after selecting Budget 2013 scenario.

Parameters

  • Scenario: Select the scenario you want to load employee data directly to from the list box. The scenarios populated in the list box include those created using the Scenario Setup page.
  • Note:
    For forecast scenarios, while loading allocations with Across columns data load format, it is not possible to select only the open periods for the load for data mappings. We recommend that you use the Down Rows and Across columns by Scenario options to load the allocations to the forecast scenarios if there are any closed periods.

If annually varying compensation items are created in the scenario, relevant columns become visible for mapping automatically. They are otherwise hidden.

  • Payroll upload Date: Select the date for which you want to upload payroll data.

Setup columns and order

  • Data Load Column (Type): Contains a list of columns that will be displayed (by Display Name) in the resulting load. Change the order in which the columns are displayed using the arrows to move the columns up and down the list. There are 5 system delivered/defined columns, which are:
    1. Employee Number
    2. Hire Date
    3. Pay Plan Code
    4. Employee Name
    5. Employee Type Code
Note:
System delivered/defined columns cannot be deleted.
  • Display Name: The name of the Data Load Column. 
  • Basis: The basis is used to calculate compensation amounts. This field is available for selection for compensation items.
    • Annual - Compensation amounts will be calculated on an annual basis.
    • By Month - Compensation amounts will be calculated on a monthly basis.
  • Default Value: This field is available for Position Name, Employee Name and Pay Plan Code. These three columns are system-defined and delivered and may not be deleted. Enter a default value to populate the column in the resulting load.

Time Mapping

  • Down Rows - Selecting this option indicates that time periods are defined down the rows.
  • Across Columns - Selecting this option indicates that time periods are defined across the columns.

Parameters for loading Wage Type Compensations

You can load Wage Type compensations (i.e. Salary) for compensation items with a Wage Type or Wage Type without Reviews basis.

The following options are available for Salary and Wage Hours parameters:

  • Annual Salary - Annual input value.
  • Monthly Salary - Monthly input value; the annual amount is calculated by multiplying this number by the number of fiscal periods.
  • Annual Wage Hours - Annual input value; the monthly amount is calculated by dividing this number by the number of fiscal periods.
  • Monthly Wage Hours - By-period input value.
Note:
Rate, Hour and Default Value columns cannot be blank. Load a ‘0’ (Zero) value when a field is not applicable for the compensation.
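The annual/monthly conversions described above can be sketched as follows (assuming 12 monthly fiscal periods; the option names mirror the list above but the function itself is illustrative):

```python
FISCAL_PERIODS = 12  # assumption: 12 monthly fiscal periods

def annualize(value: float, basis: str) -> float:
    """Convert a Salary/Wage Hours input to an annual figure.

    Monthly inputs are multiplied by the number of fiscal periods;
    annual inputs are taken as-is.
    """
    if basis in ("Monthly Salary", "Monthly Wage Hours"):
        return value * FISCAL_PERIODS
    return value

print(annualize(5000, "Monthly Salary"))   # 60000
print(annualize(60000, "Annual Salary"))   # 60000
```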

Description of Fields for Load Item - Currency Exchange Rates

Parameters

  • Scenario: Select the Scenario for which you want to load exchange rates. Scenarios represent versions of a budget or forecast for a fiscal year and are created on the Scenario Setup page.
    • If you select a Budget type scenario, the Time Mapping Across Columns by Scenario option is available.
  • Apply Currency Conversion: Automatically converts local currencies to a common currency. 
  • Time Mapping
    • Across Columns by Scenario is available for Currency Exchange Rates if a budget scenario is selected while loading Exchange Rates.
    • Fiscal Year and Fiscal Month will be disabled in Default Settings. Fiscal Year will be disabled in Data Mappings.
      • Down Rows - Selecting this option indicates that time periods are defined down the rows.
      • Across Columns - Selecting this option indicates that time periods are defined across the columns.

Defaults

  • Parameter: The parameters listed are those available to be included in the data file.
  • Include in Data File: 
    • Select Yes if a dimension is not defined within the data file and there is a common member for all records.
    • Select No if you do not want the listed parameters in the data file.
  • Select: Available if you select No for a parameter. Search for other dimensions to associate with the data file.

Selection Translation(s)

  • Select currency translations to change the value of the source currency into another (foreign) currency target value.
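Conceptually, a currency translation multiplies each source-currency value by the exchange rate for the target currency. The following is a minimal sketch; the `translate` helper and the sample rates are hypothetical, not application behavior or real data.

```python
# Hypothetical sketch of translating local-currency amounts into a common
# target currency. The rates below are made-up examples.
rates_to_usd = {"EUR": 1.10, "GBP": 1.27}

def translate(amount, source_currency, rates):
    # Multiply by the rate for the source currency; round the result.
    return round(amount * rates[source_currency], 6)

print(translate(100.0, "EUR", rates_to_usd))  # 110.0
```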

Manipulate Input File

Manipulate the file or format of the data from which you are loading data. The top pane displays the raw data, and the bottom pane displays the manipulated data. Manipulation options include:

  • Join - merges multiple columns into a new column.
  • Split - divides a column into multiple columns.
  • Move - changes the display order of the columns.
  • Sort - categorizes or sorts the data based on a specific column.
  • Replace - finds and replaces values in the selected column.
  • Create - creates a new column with a user-defined value.
  • Headers - allows the creation of column headings for the data.
  • Manipulations - provides a view of all the manipulations applied to the data.

Description of Fields for the Manipulate Input File Page

Create: Create a new column on the Create page.

  • Column Name - Enter the name of the new column.
  • Column Value - Enter the row value for the column.

Copy: Select the column you want to copy and click Copy to open the Copy page, where you will create two new columns.

Description of fields on the Copy dialog page:

  • Source Column - Contains the information from the column you copied. Enter the name of the new column to which the contents will be copied.
  • Target Column - Contains the information for the column you copy to.
  • Add New - Click Add New to add the new column as the first column in the Manipulations grid.

Note:
All manipulations are available for newly copied columns.

Grid Actions

Sort: Sort the data based on a selected column.

Manipulations

Join: Join two columns into one column by providing a Join Column Name and Join Operator.

  • Join Column Name - Select the columns you want to join by pressing the Shift key on your keyboard while selecting the columns. Enter the new name of the column you are creating with the join. For example, a Join Column Name might be COL3 for joined columns 1 (COL1) and 2 (COL2).
  • Join Operator:
    • None - No join operator.
    • Underscore - Add an underscore. For example, if you add column 6 containing X with column 7 containing Y, the joined column 8 will contain X_Y.
    • Space - Add a space between the contents of the two joined columns.
    • Dot - Add a dot between the contents of the two joined columns.
    • Hyphen - Add a hyphen between the contents of the two joined columns.
  • Add New - Add the newly joined column to the spreadsheet on the Manipulate Input File page.
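The Join behavior can be sketched as below, mirroring the X_Y example. The `join_columns` helper is hypothetical, shown only to illustrate what the manipulation produces.

```python
# Sketch of the Join manipulation: two source columns combined into a new
# column using a join operator (underscore here). Hypothetical helper.
def join_columns(rows, col1, col2, operator="_"):
    # Each row contributes one joined value, e.g. "X" + "_" + "Y" -> "X_Y".
    return [row[col1] + operator + row[col2] for row in rows]

rows = [{"COL1": "X", "COL2": "Y"}]
print(join_columns(rows, "COL1", "COL2"))  # ['X_Y']
```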

Split: Split one column into two columns by providing Split at, Split Column Name 1, and Split Column Name 2 information.

  • Split at - Select the column you want to split. Enter the point where the split will occur within the column.
  • Split Column Name 1 - The name of the first column you are creating by performing the split.
  • Split Column Name 2 - Enter the name of the second column you are creating by performing the split.
  • Add New - Split the column and display the new columns on the spreadsheet on the Manipulate Input File page.
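The Split behavior can be sketched as below, assuming the split point is a character position within the column's values. The `split_column` helper is hypothetical, not the application's implementation.

```python
# Sketch of the Split manipulation: one column divided into two new columns
# at a fixed character position. Hypothetical helper for illustration.
def split_column(values, split_at):
    # Each value yields a pair: characters before and after the split point.
    return [(v[:split_at], v[split_at:]) for v in values]

print(split_column(["2024Jan", "2024Feb"], 4))
# [('2024', 'Jan'), ('2024', 'Feb')]
```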

Move: Move the placing of the column up, down, to the top, or to the bottom of the grid. 

Replace: Replace a selected column name on the Replace page.

  • Replace/With - Enter the text you want to replace in the Replace field and the replacement text in the With field.
  • Replace Entire Sheet - Replace across the entire grid of columns. If this checkbox is not selected (or not available), the replace happens only in the selected column. By default, Replace Entire Sheet is deselected.
  • Match Entire Cell Content - Select this checkbox to replace only those cells whose contents exactly match the text in the Replace field (full-cell replacement). If you do not select this checkbox, the text is replaced wherever it appears in a cell, whether it matches the cell contents partially or fully. By default, Match Entire Cell Content is deselected.
Note:
You can enter special characters (i.e. / . > , < ; : “ ) when using the Replace option.
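The difference the Match Entire Cell Content checkbox makes can be sketched as follows; the `replace_in_column` helper is hypothetical and only illustrates the semantics described above.

```python
# Sketch of the Replace semantics: with match_entire_cell=True, only cells
# that exactly equal the search text are replaced; otherwise the text is
# replaced wherever it appears, even as partial cell content.
def replace_in_column(cells, find, with_value, match_entire_cell=False):
    if match_entire_cell:
        return [with_value if c == find else c for c in cells]
    return [c.replace(find, with_value) for c in cells]

cells = ["NA", "NA-East", "EU"]
print(replace_in_column(cells, "NA", "NorthAmerica", match_entire_cell=True))
# ['NorthAmerica', 'NA-East', 'EU']
print(replace_in_column(cells, "NA", "NorthAmerica"))
# ['NorthAmerica', 'NorthAmerica-East', 'EU']
```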

Data Headers: Edit or create the heading names of the columns on the Headers dialog page. Manipulated Columns provides information on the name of the columns. Header Columns provides the name of the headers, which can be edited.

Data Manipulations: View data manipulations on the Manipulations page.

  • Order - Reverse the order of the columns.
  • Operation - The operations defined (i.e. Split, Join).
  • Join Operator - The join operators created when you performed the Join (underscore, dot, hyphen, or space).
  • Source Columns - The columns copied from when you performed the Copy.
  • Target Columns - The columns copied to when you performed the Copy.

Grid Action

Turn the row and column headers on or off.

Auto-Size Column Width - Automatically adjust the columns in the Raw Data and Manipulations Data grids to fit the contents.

The columns in the Manipulations grid may also be resized by dragging them to the right as you would in an Excel spreadsheet. The resized column settings are saved when you click Next or Finish.

Define Data Mappings

Map Planful source data columns (Source Column) to target segments (Maps To). This mapping tells the system where and how your data will display in the columns once the load takes place.

You can map up to 100 columns. For example, if the number of Budget Input Years is 5, you may map as many as 60 month columns (5 years × 12 months). However, it is not necessary to map all the months for a scenario. You must map at least one month to load data.

Description of Fields for the Define Data Mappings Page

Source Columns/Map To

View Source Columns and select a Maps To option.

  • If Across Columns was selected for Time Mapping on the Define Overall Rule Setting page, monthly values will be provided in columns when the data is loaded through File, Copy Paste or Web Services. There are multiple measures that can be loaded while loading sales data.
  • If the Across Columns option was selected instead of individual Measures, the Measure Code is available in the Data Mappings in addition to the other columns. Measure Code is required in the source file.
  • Support Role (if not selected on the Define Overall Rule Settings page to be included in the data file) is added to the Data Mappings as a mandatory field.

Load Data

This is the final step in the Data Load Process. If there were issues with the load, an error log is created. If the Data Load was successful, a message indicating so will appear. It's important to load data in chronological order to avoid discrepancies and multiple loads.

Note:
When you load data values with numerous decimal places, the Planful application automatically rounds each value to 6 decimal places. To load an Excel file, save the file as a txt (tab delimited) type and then load.
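As an illustration of the rounding behavior described in the note, Python's built-in `round` produces an equivalent result (this is a sketch of the effect, not the application's internal code):

```python
# A value with more than 6 decimal places is rounded to 6 decimal places
# on load, as the note above describes.
value = 1.23456789
print(round(value, 6))  # 1.234568
```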

Once data is loaded, a status page displays the target scenario and the load status with any exceptions. If data does not load properly, click the Exception Report button to view the Data Load Exception Report.

Note:
Compensation and Allocation data loads for a Forecast Scenario are applicable for open periods only based on the scenario definition.

Processing of Data Load Rules - Scenario Locking

Available for Shared Mode Applications

This applies only to applications where Shared Mode is enabled. For information on Shared Mode, refer to the Shared Mode documentation. If you are not sure whether your application has Shared Mode, please contact Planful Support.

Overview

To prevent Data Load Rules and Consolidation Processes from running on the same scenario simultaneously, Scenario Lock functionality is in place.

When Data Load Rules and Consolidation Processes are executed, the first one initiated acquires a lock at the Scenario level, preventing the second process from executing on the same scenario until the first completes. If a second process is initiated before the first one completes, it automatically retries every 30 seconds to acquire the scenario lock, for up to 10 minutes. If the second process acquires the lock within that window (meaning the first process completed within the 10 minutes), it executes and completes. If it cannot acquire the lock within 10 minutes (meaning the first process is still running), the second process is canceled and you can retry it later. Information on the status of canceled, completed, and queued processes is available in Cloud Scheduler.
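The wait-and-retry behavior described above can be sketched as a simple polling loop. The timing values come from the text; the `acquire_scenario_lock` callable is hypothetical, standing in for whatever the application uses to take the scenario-level lock.

```python
# Sketch of the scenario-lock retry behavior: retry every 30 seconds for up
# to 10 minutes, then cancel. The lock function is a hypothetical stand-in.
import time

RETRY_INTERVAL_S = 30   # second process retries every 30 seconds...
MAX_WAIT_S = 10 * 60    # ...for up to 10 minutes before being canceled

def try_to_run(acquire_scenario_lock, now=time.monotonic, sleep=time.sleep):
    deadline = now() + MAX_WAIT_S
    while now() < deadline:
        if acquire_scenario_lock():
            return "executed"   # first process finished in time
        sleep(RETRY_INTERVAL_S)
    return "canceled"           # first process still running after 10 min
```

The `now` and `sleep` parameters are injected so the timing can be simulated without real waiting; that is a testing convenience, not part of the described behavior.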

Application Behavior

You can run Data Load Rules synchronously and/or asynchronously.

Behavior When Running Asynchronous Data Load Rules

  • Asynchronous Data Load Rules run in the cloud and are executed via Cloud Scheduler.
  • Asynchronous Data Load Rules have a limit of one process per application. So when two Data Load Rules are executed by the same or different users (on the same or two different scenarios), they are queued in the Cloud Scheduler and executed one after another.

Behavior When Running Synchronous Data Load Rules

  • Synchronous Data Load Rules run immediately, and you cannot continue working in the application until they complete.
  • You will not see entries in the Cloud Scheduler; you will see entries in the Data Load History.
  • Two different users can run a Data Load Rule independently at the same time as long as each is run for a different scenario. If both are run for the same scenario, the subsequent process will fail and result in an error.

Behavior When Running a Combination of Synchronous and Asynchronous Data Load Rules

  • You can run Data Load Rules asynchronously and synchronously within the same application. If an asynchronous Data Load Rule is running and a synchronous Data Load Rule is initiated for a different scenario, both will be executed.
  • If an asynchronous Data Load Rule is running and a synchronous Data Load Rule is initiated for the same scenario, the synchronous Data Load Rule will fail immediately.

If a synchronous Data Load Rule is running and an asynchronous Data Load Rule is initiated for the same scenario, the asynchronous Data Load Rule will be queued in the Cloud Scheduler (the Status will be ‘running’ while waiting). The wait time is ten minutes, after which the application cancels the queued process.

