Snowflake Dynamic Tables: When to Use Them and When to Stick with Streams/Tasks
Quick answer: Dynamic Tables are Snowflake's declarative approach to incremental pipelines - you write a SELECT, set a target_lag, and Snowflake handles the refresh. They're great for simple transformations like star schema refreshes. Stick with Streams/Tasks when you need multi-step logic, MERGE operations, external API calls, or JavaScript UDFs.
Last updated: January 2026
Snowflake Dynamic Tables automatically refresh materialized query results to meet a freshness target you define, replacing manual Streams + Tasks pipelines with a single statement. They're best for transformation layers (staging → marts) where data freshness of 1-60 minutes is acceptable; use Streams + Tasks instead when you need sub-minute latency or complex multi-step CDC logic. Instead of wiring together a Stream to track changes, a Task to schedule processing, and a stored procedure to run the actual SQL, you just write a CREATE DYNAMIC TABLE statement with a SELECT query and a freshness target. Snowflake figures out the rest.
That's the theory. In practice, there are real tradeoffs between Dynamic Tables and the Streams/Tasks pattern that determine which one you should use for a given pipeline. This guide walks through both approaches with enough detail that you can make that call for your own workloads.
What Dynamic Tables Actually Do
A Dynamic Table is a materialized result set that Snowflake keeps up to date automatically. You define it with a SQL SELECT statement and a target_lag parameter that tells Snowflake how stale the data is allowed to get. Snowflake's internal scheduler then figures out how often to refresh the table to meet that target.
Here's the key mental model: a Dynamic Table is like a materialized view that you can control the refresh frequency of. Except unlike materialized views in Snowflake (which are best for simple queries on a single table), Dynamic Tables support joins, aggregations, window functions, and CTEs.
Here's a real example. Say you have a raw orders table and a raw customers table, and you want a denormalized order_summary that joins them and adds some computed columns:
CREATE OR REPLACE DYNAMIC TABLE order_summary
  TARGET_LAG = '10 minutes'
  WAREHOUSE = transform_wh
AS
SELECT
    o.order_id,
    o.order_date,
    o.total_amount,
    c.customer_name,
    c.segment,
    o.total_amount - COALESCE(o.discount, 0) AS net_amount,
    DATEDIFF('day', c.first_order_date, o.order_date) AS customer_tenure_days
FROM raw.orders o
JOIN raw.customers c ON o.customer_id = c.customer_id
WHERE o.status != 'cancelled';
That's it. No Stream, no Task, no stored procedure, no scheduling config. Snowflake will keep order_summary within 10 minutes of the source tables. When new rows land in raw.orders or raw.customers, Snowflake detects the change and refreshes the Dynamic Table on the transform_wh warehouse.
Understanding target_lag
The target_lag parameter is the most important configuration on a Dynamic Table. It doesn't set a fixed schedule - it sets a freshness guarantee. Snowflake's scheduler decides when and how often to refresh, based on how frequently the source data changes and how long each refresh takes.
You can set it as a time interval or use the special keyword DOWNSTREAM:
- Time-based: TARGET_LAG = '5 minutes', '1 hour', '1 day'. The minimum is 1 minute.
- DOWNSTREAM: TARGET_LAG = DOWNSTREAM. This means the Dynamic Table only refreshes when a downstream Dynamic Table that depends on it triggers a refresh. Useful for intermediate tables in a chain.
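As a sketch of the DOWNSTREAM pattern, an intermediate table can defer entirely to its consumer, so the final table's lag target drives the whole chain (table and warehouse names here are illustrative):

```sql
-- Intermediate table: refreshes only when a downstream consumer needs it
CREATE OR REPLACE DYNAMIC TABLE stg_orders
  TARGET_LAG = DOWNSTREAM
  WAREHOUSE = transform_wh
AS
SELECT order_id, customer_id, total_amount
FROM raw.orders
WHERE status != 'cancelled';

-- Final table: its 15-minute target also drives refreshes of stg_orders
CREATE OR REPLACE DYNAMIC TABLE mart_customer_value
  TARGET_LAG = '15 minutes'
  WAREHOUSE = transform_wh
AS
SELECT customer_id, SUM(total_amount) AS lifetime_value
FROM stg_orders
GROUP BY customer_id;
```

This keeps you from having to pick a lag for the intermediate table that you'd then need to keep in sync with the mart's requirements.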
A gotcha: setting target_lag = '1 minute' doesn't mean the table refreshes every minute. If a refresh takes 3 minutes to complete, Snowflake will start the next one immediately after the previous finishes, but you'll see actual lag of 3+ minutes. The target is a goal, not a hard guarantee. Monitor actual lag with:
SELECT *
FROM TABLE(INFORMATION_SCHEMA.DYNAMIC_TABLE_REFRESH_HISTORY(
NAME => 'MY_DB.MY_SCHEMA.ORDER_SUMMARY',
DATA_TIMESTAMP_START => DATEADD('hour', -6, CURRENT_TIMESTAMP())
))
ORDER BY DATA_TIMESTAMP DESC;
REFRESH_MODE: AUTO, FULL, and INCREMENTAL
Dynamic Tables support three refresh modes, controlled by the REFRESH_MODE parameter:
- AUTO (default): Snowflake decides whether to use incremental or full refresh based on the query structure. If it can process only changed rows, it will. If the query is too complex for incremental processing, it falls back to full refresh.
- INCREMENTAL: Forces incremental refresh. Snowflake will error at creation time if the query doesn't support incremental processing. Use this when you want to be sure you're not accidentally doing full refreshes on a large table.
- FULL: Forces a complete rebuild every refresh cycle. Use this when the transformation logic requires it (e.g., window functions that recompute over the full dataset) or when the table is small enough that full refresh is faster than tracking changes.
The practical tip: start with REFRESH_MODE = AUTO and check the refresh history. If you see refresh_action showing FULL when you expected INCREMENTAL, your query probably has a construct that doesn't support incremental processing (certain window functions, for example). At that point, decide whether to rewrite the query or accept full refresh.
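If you'd rather fail fast than silently pay for full rebuilds, force incremental mode at creation time and confirm from the refresh history what Snowflake actually did (a sketch; object names are illustrative):

```sql
-- Errors at creation if the query can't be processed incrementally
CREATE OR REPLACE DYNAMIC TABLE order_summary
  TARGET_LAG = '10 minutes'
  WAREHOUSE = transform_wh
  REFRESH_MODE = INCREMENTAL
AS
SELECT o.order_id, o.total_amount, c.customer_name
FROM raw.orders o
JOIN raw.customers c ON o.customer_id = c.customer_id;

-- Verify: refresh_action should show INCREMENTAL, not FULL
SELECT data_timestamp, refresh_action, refresh_trigger
FROM TABLE(INFORMATION_SCHEMA.DYNAMIC_TABLE_REFRESH_HISTORY(
  NAME => 'MY_DB.MY_SCHEMA.ORDER_SUMMARY'
))
ORDER BY data_timestamp DESC;
```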
When Dynamic Tables Are the Right Choice
Dynamic Tables work best when your pipeline is essentially a chain of SQL transformations and you want Snowflake to handle the scheduling and change tracking. Specific use cases:
- Star schema dimension and fact table refreshes. If you're building a star schema from raw tables, each dimension and fact table can be a Dynamic Table with appropriate lag. Dimensions might have target_lag = '1 hour' while fact tables use '10 minutes'.
- Denormalized reporting tables. The classic "join 5 tables, add computed columns, land it in a flat table for BI" pattern. Dynamic Tables handle this well.
- Multi-hop pipelines where you want declarative dependencies. If Dynamic Table B reads from Dynamic Table A, Snowflake automatically chains the refreshes. You don't need to manage Task dependencies manually.
- Replacing simple dbt models that don't need testing or documentation. If you have dbt models that are purely SQL transformations without custom tests, Dynamic Tables can replace them and give you near-real-time freshness without a scheduler.
When to Stick with Streams/Tasks
Dynamic Tables have real limitations. Here's when you need the Streams/Tasks pattern instead:
- MERGE operations (upserts). Dynamic Tables don't support MERGE. If your pipeline needs to update existing rows based on a business key, you need a Task that runs a MERGE statement against a Stream.
- JavaScript UDFs or external functions. Dynamic Tables can't call JavaScript UDFs, Python UDFs, or external functions. If your transformation involves calling an ML model endpoint or running custom logic that can't be expressed in SQL, Streams/Tasks is the way.
- Multi-step procedural logic. If your pipeline involves conditional branching (do X if the data looks like this, do Y otherwise), stored procedures called by Tasks are more appropriate.
- Writing to multiple targets from a single change set. A Stream can be consumed by multiple Tasks, each writing to a different target table. Dynamic Tables produce a single output.
- External API calls or notifications. Need to send a webhook or call an API when data changes? That's a Task with a stored procedure, not a Dynamic Table.
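For contrast, the Streams/Tasks equivalent of an upsert pipeline looks roughly like this (a sketch; the stream, task, and table names are illustrative):

```sql
-- 1. A Stream tracks inserts/updates/deletes on the source table
CREATE OR REPLACE STREAM orders_stream ON TABLE raw.orders;

-- 2. A Task wakes on a schedule, runs only if the stream has data,
--    and MERGEs changes into the target by business key
CREATE OR REPLACE TASK merge_orders_task
  WAREHOUSE = transform_wh
  SCHEDULE = '5 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
AS
MERGE INTO analytics.orders_current t
USING orders_stream s
  ON t.order_id = s.order_id
WHEN MATCHED THEN UPDATE SET
  t.total_amount = s.total_amount,
  t.status = s.status
WHEN NOT MATCHED THEN INSERT (order_id, total_amount, status)
  VALUES (s.order_id, s.total_amount, s.status);

-- Tasks are created suspended; resume to start the schedule
ALTER TASK merge_orders_task RESUME;
```

That's three objects and a schedule to manage instead of one declaration, which is the complexity Dynamic Tables remove when MERGE isn't required.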
Monitoring and Troubleshooting
Dynamic Tables are managed by Snowflake's internal scheduler, which means you don't control exactly when refreshes happen. That's great until something goes wrong. Here's how to stay on top of it:
Check current lag and status:
SHOW DYNAMIC TABLES LIKE 'ORDER_SUMMARY' IN SCHEMA my_db.my_schema;
-- Key columns to check:
-- SCHEDULING_STATE: RUNNING, SUSPENDED, or FAILED
-- DATA_TIMESTAMP: When the data was last refreshed
-- TARGET_LAG: Your configured lag
If a Dynamic Table falls behind: Check whether the warehouse is undersized (refresh takes longer than the target lag), whether the source data volume spiked, or whether there's a resource contention issue from other workloads on the same warehouse. A dedicated warehouse for Dynamic Table refreshes is usually worth the cost on production workloads.
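While investigating, it can help to trigger a refresh outside the scheduler or pause the table entirely while you resize the warehouse (a sketch, assuming a table named order_summary):

```sql
-- Trigger an immediate refresh outside the scheduler
ALTER DYNAMIC TABLE order_summary REFRESH;

-- Pause refreshes while you investigate or resize the warehouse,
-- then resume when you're done
ALTER DYNAMIC TABLE order_summary SUSPEND;
ALTER DYNAMIC TABLE order_summary RESUME;
```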
Common Gotchas
- You can't use JavaScript UDFs. This catches people who have existing pipelines with JS UDFs for string parsing or date conversion. You'll need to rewrite those as SQL expressions or switch to Streams/Tasks.
- No direct MERGE support. If your transformation needs to upsert, Dynamic Tables won't work. The workaround - using a Dynamic Table as a staging area and then a Task to MERGE into the final table - adds complexity that defeats the purpose.
- Warehouse costs can sneak up. Each Dynamic Table refresh consumes warehouse compute. If you have 50 Dynamic Tables all refreshing every 5 minutes on the same warehouse, the credits add up. Group related tables on dedicated warehouses and set appropriate auto-suspend timeouts.
- ALTER DYNAMIC TABLE has limited options. You can change target_lag and warehouse without recreating the table, but changing the underlying query requires a CREATE OR REPLACE, which triggers a full refresh.
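Changing those two settings in place looks like this (a sketch, assuming the order_summary table from earlier and a hypothetical reporting_wh warehouse):

```sql
-- Both settings can be changed without rebuilding the table
ALTER DYNAMIC TABLE order_summary SET
  TARGET_LAG = '30 minutes'
  WAREHOUSE = reporting_wh;
```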
Key Takeaways
- Dynamic Tables are the right fit for declarative, SQL-only transformation pipelines where you want Snowflake to manage the refresh schedule.
- Use target_lag to set freshness requirements, not a fixed schedule. Snowflake's scheduler optimizes refresh frequency automatically.
- Stick with Streams/Tasks for anything requiring MERGE, JavaScript UDFs, procedural logic, or external function calls.
- Monitor with DYNAMIC_TABLE_REFRESH_HISTORY() and SHOW DYNAMIC TABLES. Don't assume the target lag is always met; check actual performance.
- Start with REFRESH_MODE = AUTO and verify that Snowflake is using incremental refresh for large tables.
Frequently Asked Questions
Q: What is a Snowflake Dynamic Table?
A Dynamic Table is a declarative pipeline object in Snowflake that automatically materializes the results of a SQL query and keeps them refreshed based on a target_lag setting. You define the transformation as a SELECT statement and Snowflake handles the refresh scheduling and incremental logic automatically.
Q: What does target_lag mean in Dynamic Tables?
target_lag defines the maximum staleness you'll accept. Setting target_lag = '10 minutes' tells Snowflake to keep the table within roughly 10 minutes of its source data; the internal scheduler determines the actual refresh frequency needed to meet that target. It's a goal rather than a hard guarantee, so monitor actual lag on busy tables.
Q: Can Dynamic Tables replace dbt models?
They can replace some dbt models - specifically simple transformations that don't require Jinja logic, custom tests, or documentation. Many teams use both: Dynamic Tables for near-real-time materialization and dbt for governed, tested transformations with version control and CI/CD.
Q: What are the limitations of Snowflake Dynamic Tables?
Dynamic Tables don't support JavaScript UDFs, MERGE operations, stored procedures, or external function calls in their defining query. They also can't read from external tables or stages directly. If your transformation needs any of these, you'll need Streams and Tasks instead.
