Snowflake Best Practices

Managing and Validating SQL Queries:

  • Testing Queries: Test your SQL outside Planful first. Run the query in a Snowflake worksheet or any SQL editor and inspect the results. Verify that the returned data looks correct (e.g., the right date range, no duplicates, the expected number of records); the sanity-check queries after this list show one way to do this. Testing up front catches issues early, such as a filter that excludes too much or too little.

  • Performance Considerations: GL tables can be large (millions of rows). Use WHERE clauses to limit data whenever possible, and select only the columns you truly need; for example, if Planful only needs the amount and account, there is no need to select every column (see the lean query sketch after this list). This reduces the data volume and speeds up the transfer. Snowflake’s warehouses can handle large queries, but needlessly large result sets will slow down the import.

  • Use of Views or Stored Logic: If your GL data requires joins or calculations (for example, joining account codes to descriptions or converting currencies), consider the following approach:

    • One convenient approach is to create a view in Snowflake that encapsulates this logic. (A regular view is the right fit here; Snowflake materialized views cannot contain joins.) For example, a view v_gl_transactions_filtered joins the chart of accounts and filters out specific system entries (see the CREATE VIEW sketch after this list). You can then SELECT * FROM v_gl_transactions_filtered in Planful.

    • The benefit is twofold:

      • Snowflake can optimize the view’s query, and

      • If you need to change the logic, you can update the view in Snowflake (which is often version-controlled via SQL scripts or a tool like dbt) rather than modifying the query in the Planful UI. This provides better change management for complex logic.

  • Version Control and Documentation: Treat the SQL query powering your Planful import as a piece of source code for your data pipeline: keep it in version control so changes are tracked and reviewable.

    • Document what each query does (e.g., “This query pulls the last 2 months of GL transactions, excluding inter-company entries”). This helps future maintainers and auditors understand the data lineage.

  • Validation: After Planful imports the data, do a quick reconciliation against Snowflake. For instance, if you imported a sum of amounts by month, cross-check that the totals match between Planful and a Snowflake query (see the reconciliation query after this list). This confirms the data was transferred correctly and completely.
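To make the testing step concrete, here is a minimal sketch of sanity checks you might run in a Snowflake worksheet before pasting a query into Planful. The table my_db.gl.gl_transactions and its columns are hypothetical placeholders; substitute your own names:

    -- Row count and date range: confirm the filter returns the expected window
    SELECT COUNT(*)          AS row_count,
           MIN(posting_date) AS earliest,
           MAX(posting_date) AS latest
    FROM my_db.gl.gl_transactions
    WHERE posting_date >= DATEADD(month, -2, CURRENT_DATE);

    -- Duplicate check: each journal line should appear exactly once
    SELECT journal_id, line_no, COUNT(*) AS copies
    FROM my_db.gl.gl_transactions
    GROUP BY journal_id, line_no
    HAVING COUNT(*) > 1;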
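For the performance point, a lean import query might look like the following sketch, again with hypothetical names. It selects only the columns Planful needs and filters to the relevant period:

    SELECT account_code,
           posting_date,
           amount
    FROM my_db.gl.gl_transactions
    WHERE posting_date >= DATEADD(month, -2, CURRENT_DATE)  -- limit to the needed window
      AND entry_type <> 'SYSTEM';                           -- drop rows Planful should not load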
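The view approach could be sketched as follows. The view name v_gl_transactions_filtered comes from the example above; the underlying table, the chart-of-accounts table, and the join columns are assumptions:

    CREATE OR REPLACE VIEW my_db.gl.v_gl_transactions_filtered AS
    SELECT t.account_code,
           a.account_description,            -- joined in from the chart of accounts
           t.posting_date,
           t.amount
    FROM my_db.gl.gl_transactions t
    JOIN my_db.gl.chart_of_accounts a
      ON a.account_code = t.account_code
    WHERE t.entry_type <> 'SYSTEM';          -- filter out specific system entries

    -- Planful then only needs:
    SELECT * FROM my_db.gl.v_gl_transactions_filtered;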
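And for reconciliation, a query that aggregates the same way as the import makes the cross-check straightforward (names are again placeholders); compare these totals to what Planful reports after the load:

    SELECT DATE_TRUNC('month', posting_date) AS month,
           SUM(amount)                       AS total_amount
    FROM my_db.gl.gl_transactions
    WHERE posting_date >= DATEADD(month, -2, CURRENT_DATE)
    GROUP BY 1
    ORDER BY 1;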

SQL Query Management in Planful:

Planful provides an interface for entering the SQL. When you save the import configuration, Planful validates the SQL (checking the syntax or running a small preview). If there is an error, such as a syntax error or a privilege issue, Planful displays it.

Common mistakes include missing a semicolon (where required) and not fully qualifying table names. Unless you set a default database and schema in the connection, Planful’s session might not have a context set, so it’s best to use database.schema.table, especially in cross-database queries (see the example below).
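For example, assuming a hypothetical my_db.gl.gl_transactions table:

    -- May fail if Planful's session has no database/schema context set:
    SELECT account_code, amount FROM gl_transactions;

    -- Works regardless of session context:
    SELECT account_code, amount FROM my_db.gl.gl_transactions;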

In summary, writing SQL for GL data is about selecting the necessary data and filtering out the rest. Keep queries as simple as possible, test them thoroughly, and leverage Snowflake’s power (through views or aggregations) to lighten the load on Planful. By managing these queries carefully and possibly version-controlling them, you ensure that the data import process is reliable and transparent.

