Snowflake Dynamic Tables: When to Use Them and When to Stick with Streams/Tasks

Quick answer: Dynamic Tables are Snowflake's declarative approach to incremental pipelines - you write a SELECT, set a target_lag, and Snowflake handles the refresh. They're great for simple transformations like star schema refreshes. Stick with Streams/Tasks when you need multi-step logic, MERGE operations, external API calls, or JavaScript UDFs.

Last updated: January 2026

Snowflake Dynamic Tables automatically refresh materialized query results on a schedule you define, replacing manual Streams + Tasks pipelines. They're best for transformation layers (staging → marts) where data freshness of 1-60 minutes is acceptable; use Streams + Tasks instead when you need sub-minute latency or complex multi-step CDC logic. Instead of wiring up a Stream to track changes, a Task to schedule processing, and a stored procedure to run the actual SQL, you just write a single CREATE DYNAMIC TABLE statement with a SELECT query and a freshness target. Snowflake figures out the rest.


That's the theory. In practice, there are real tradeoffs between Dynamic Tables and the Streams/Tasks pattern that determine which one you should use for a given pipeline. This guide walks through both approaches with enough detail that you can make that call for your own workloads.


What Dynamic Tables Actually Do


A Dynamic Table is a materialized result set that Snowflake keeps up to date automatically. You define it with a SQL SELECT statement and a target_lag parameter that tells Snowflake how stale the data is allowed to get. Snowflake's internal scheduler then figures out how often to refresh the table to meet that target.


Here's the key mental model: a Dynamic Table is like a materialized view whose refresh frequency you control. Unlike Snowflake's materialized views (which are limited to simple queries on a single table), though, Dynamic Tables support joins, aggregations, window functions, and CTEs.


Here's a real example. Say you have a raw orders table and a raw customers table, and you want a denormalized order_summary that joins them and adds some computed columns:


SQL - Dynamic Table definition
CREATE OR REPLACE DYNAMIC TABLE order_summary
  TARGET_LAG = '10 minutes'
  WAREHOUSE = transform_wh
AS
SELECT
    o.order_id,
    o.order_date,
    o.total_amount,
    c.customer_name,
    c.segment,
    o.total_amount - COALESCE(o.discount, 0) AS net_amount,
    DATEDIFF('day', c.first_order_date, o.order_date) AS customer_tenure_days
FROM raw.orders o
JOIN raw.customers c ON o.customer_id = c.customer_id
WHERE o.status != 'cancelled';

That's it. No Stream, no Task, no stored procedure, no scheduling config. Snowflake will keep order_summary within 10 minutes of the source tables. When new rows land in raw.orders or raw.customers, Snowflake detects the change and refreshes the Dynamic Table on the transform_wh warehouse.


Understanding target_lag


The target_lag parameter is the most important configuration on a Dynamic Table. It doesn't set a fixed schedule - it sets a freshness guarantee. Snowflake's scheduler decides when and how often to refresh, based on how frequently the source data changes and how long each refresh takes.


You can set it as a time interval or use the special keyword DOWNSTREAM:

- A time interval, such as TARGET_LAG = '10 minutes', sets the maximum staleness you're willing to accept for this table relative to its sources.
- TARGET_LAG = DOWNSTREAM tells Snowflake to refresh this table only as often as the Dynamic Tables that read from it require. This is the usual choice for intermediate layers in a chain.


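For illustration, here's how each form looks, assuming the order_summary table from earlier plus a hypothetical staging table called stg_orders:

SQL - target_lag options
-- Fixed freshness target: the table should stay within 5 minutes of its sources
ALTER DYNAMIC TABLE order_summary SET TARGET_LAG = '5 minutes';

-- DOWNSTREAM: refresh only as often as the Dynamic Tables that depend on
-- this one require; the usual choice for intermediate layers in a chain
ALTER DYNAMIC TABLE stg_orders SET TARGET_LAG = DOWNSTREAM;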
A gotcha: setting target_lag = '1 minute' doesn't mean the table refreshes every minute. If a refresh takes 3 minutes to complete, Snowflake will start the next one immediately after the previous finishes, but you'll see actual lag of 3+ minutes. The target is a goal, not a hard guarantee. Monitor actual lag with:


SQL - Check refresh history
SELECT *
FROM TABLE(INFORMATION_SCHEMA.DYNAMIC_TABLE_REFRESH_HISTORY(
    NAME => 'MY_DB.MY_SCHEMA.ORDER_SUMMARY',
    DATA_TIMESTAMP_START => DATEADD('hour', -6, CURRENT_TIMESTAMP())
))
ORDER BY DATA_TIMESTAMP DESC;

REFRESH_MODE: AUTO, FULL, and INCREMENTAL


Dynamic Tables support three refresh modes, controlled by the REFRESH_MODE parameter:

- AUTO (the default): Snowflake analyzes the query and picks incremental refresh where it can, falling back to full refresh where it can't.
- INCREMENTAL: Snowflake processes only the changes in the sources since the last refresh. Creation fails if the query contains constructs that incremental refresh doesn't support.
- FULL: Snowflake recomputes the entire result on every refresh.

The practical tip: start with REFRESH_MODE = AUTO and check the refresh history. If you see refresh_action showing FULL when you expected INCREMENTAL, your query probably has a construct that doesn't support incremental refresh (like certain window functions or set operators such as UNION). At that point, decide whether to rewrite the query or accept full refreshes.
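You can also pin the mode at creation so that a query that can't be incrementalized fails fast instead of silently running full refreshes. A sketch, with an illustrative table name:

SQL - Forcing incremental refresh
CREATE OR REPLACE DYNAMIC TABLE order_summary_incr
  TARGET_LAG = '10 minutes'
  WAREHOUSE = transform_wh
  REFRESH_MODE = INCREMENTAL
AS
SELECT order_id, order_date, total_amount
FROM raw.orders
WHERE status != 'cancelled';

-- Confirm which mode the table actually got
SHOW DYNAMIC TABLES LIKE 'ORDER_SUMMARY_INCR';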


When Dynamic Tables Are the Right Choice


Dynamic Tables work best when your pipeline is essentially a chain of SQL transformations and you want Snowflake to handle the scheduling and change tracking. Specific use cases:

- Star schema and mart refreshes where freshness of 1-60 minutes is acceptable.
- Denormalized reporting tables that join and aggregate raw sources, like the order_summary example above.
- Multi-layer transformation chains (staging → marts), where intermediate tables can use TARGET_LAG = DOWNSTREAM.
- Replacing Streams/Tasks pipelines whose Task does nothing more than run a single SELECT-based transformation.

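A common shape here is a two-layer chain where only the final mart carries a freshness target. This is a sketch reusing the raw.orders table from earlier; the other names are illustrative:

SQL - Chained Dynamic Tables
CREATE OR REPLACE DYNAMIC TABLE stg_orders
  TARGET_LAG = DOWNSTREAM
  WAREHOUSE = transform_wh
AS
SELECT order_id, customer_id, order_date, total_amount
FROM raw.orders
WHERE status != 'cancelled';

CREATE OR REPLACE DYNAMIC TABLE mart_daily_revenue
  TARGET_LAG = '30 minutes'
  WAREHOUSE = transform_wh
AS
SELECT order_date, SUM(total_amount) AS daily_revenue
FROM stg_orders
GROUP BY order_date;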
When to Stick with Streams/Tasks


Dynamic Tables have real limitations. Here's when you need the Streams/Tasks pattern instead:

- Multi-step procedural logic: a Task can call a stored procedure that runs several statements in sequence; a Dynamic Table is a single SELECT.
- MERGE, UPDATE, or DELETE against an existing target table: a Dynamic Table only materializes a query result and can't express upserts.
- External API calls or external functions anywhere in the pipeline.
- JavaScript UDFs or stored procedures in the transformation logic.
- Sub-minute latency requirements, where you need to control exactly when processing runs.
- Sources Dynamic Tables can't read, such as external tables or stages.

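For contrast, here's a minimal sketch of the Streams/Tasks pattern for a MERGE-based upsert, something a Dynamic Table can't express. The target table and task names are illustrative:

SQL - Streams/Tasks MERGE pipeline
CREATE OR REPLACE STREAM orders_stream ON TABLE raw.orders;

CREATE OR REPLACE TASK merge_orders_task
  WAREHOUSE = transform_wh
  SCHEDULE = '5 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
AS
MERGE INTO analytics.orders_current t
USING orders_stream s
  ON t.order_id = s.order_id
WHEN MATCHED THEN UPDATE SET t.total_amount = s.total_amount, t.status = s.status
WHEN NOT MATCHED THEN INSERT (order_id, total_amount, status)
  VALUES (s.order_id, s.total_amount, s.status);

-- Tasks are created suspended; resume to start the schedule
ALTER TASK merge_orders_task RESUME;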
Monitoring and Troubleshooting


Dynamic Tables are managed by Snowflake's internal scheduler, which means you don't control exactly when refreshes happen. That's great until something goes wrong. Here's how to stay on top of it:


Check current lag and status:

SQL - Dynamic Table status
SHOW DYNAMIC TABLES LIKE 'ORDER_SUMMARY' IN SCHEMA my_db.my_schema;

-- Key columns to check:
-- SCHEDULING_STATE: ACTIVE or SUSPENDED
-- DATA_TIMESTAMP: When the data was last refreshed
-- TARGET_LAG: Your configured lag

If a Dynamic Table falls behind: Check whether the warehouse is undersized (refresh takes longer than the target lag), whether the source data volume spiked, or whether there's a resource contention issue from other workloads on the same warehouse. A dedicated warehouse for Dynamic Table refreshes is usually worth the cost on production workloads.
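When you do need to intervene, a few ALTER commands cover the common cases, shown here against the order_summary table from earlier:

SQL - Manual intervention
-- Trigger an immediate refresh outside the scheduler
ALTER DYNAMIC TABLE order_summary REFRESH;

-- Pause scheduled refreshes, e.g. during a backfill
ALTER DYNAMIC TABLE order_summary SUSPEND;

-- Resume a table that was suspended (manually or after repeated failures)
ALTER DYNAMIC TABLE order_summary RESUME;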


Common Gotchas

- target_lag is a goal, not a hard guarantee. If a refresh takes longer than the target, actual lag will exceed it until the refresh catches up.
- REFRESH_MODE = AUTO can quietly land on full refresh for a query you assumed was incremental. Check the refresh history instead of assuming.
- You can't run DML (INSERT, UPDATE, DELETE, MERGE) directly against a Dynamic Table; all changes must flow through the defining query.
- Repeated consecutive refresh failures cause Snowflake to suspend the table automatically, and it stays suspended until you resume it, so alert on the scheduling state.

Key Takeaways

- Dynamic Tables collapse a Stream + Task + stored procedure pipeline into one declarative statement: a SELECT plus a target_lag.
- target_lag is a freshness target, not a schedule or a hard guarantee; monitor actual lag with DYNAMIC_TABLE_REFRESH_HISTORY.
- Start with REFRESH_MODE = AUTO, then verify in the refresh history that you're actually getting incremental refreshes.
- Stick with Streams/Tasks for MERGE logic, multi-step procedures, external calls, UDFs, or sub-minute latency.


Pranay Vatsal, Founder & CEO

Pranay Vatsal is the Founder & CEO of CelestInfo with deep expertise in Snowflake, data architecture, and building production-grade data systems for global enterprises.


Frequently Asked Questions

Q: What is a Snowflake Dynamic Table?

A Dynamic Table is a declarative pipeline object in Snowflake that automatically materializes the results of a SQL query and keeps them refreshed based on a target_lag setting. You define the transformation as a SELECT statement and Snowflake handles the refresh scheduling and incremental logic automatically.

Q: What does target_lag mean in Dynamic Tables?

target_lag defines the maximum staleness you'll accept. Setting target_lag = '10 minutes' tells Snowflake to keep the table within 10 minutes of its source data; the internal scheduler determines the refresh frequency needed to meet that target. It's a goal the scheduler works toward, not a strict guarantee.

Q: Can Dynamic Tables replace dbt models?

They can replace some dbt models - specifically simple transformations that don't require Jinja logic, custom tests, or documentation. Many teams use both: Dynamic Tables for near-real-time materialization and dbt for governed, tested transformations with version control and CI/CD.

Q: What are the limitations of Snowflake Dynamic Tables?

Dynamic Tables don't support JavaScript UDFs, MERGE operations, stored procedures, or external function calls in their defining query. They also can't read from external tables or stages directly. If your transformation needs any of these, you'll need Streams and Tasks instead.