Snowflake Cost Optimization: 12 Things That Actually Move the Needle
Last updated: December 2025
Quick answer: The three highest-impact Snowflake cost optimizations are: (1) set auto-suspend to 60 seconds on all warehouses, (2) right-size warehouses based on actual query profiles instead of guessing, and (3) set up resource monitors with hard credit limits. Most teams can cut 20-40% of their Snowflake bill with just these three changes.
Why Your Snowflake Bill Keeps Climbing
Snowflake costs spiral for three predictable reasons: oversized warehouses, missing auto-suspend settings, and queries that bypass the result cache. The 12 optimizations below target these exact issues - dropping auto-suspend to 60 seconds saves 15-30% alone, right-sizing warehouses from XL to Medium cuts another 20%, and cluster keys on tables over 1TB can reduce scan costs by 50-70%. Left unchecked, the waste adds up fast: a LARGE warehouse (8 credits/hour) forgotten over a three-day weekend burns roughly 576 credits ($1,728 at $3/credit on-demand pricing) for doing absolutely nothing. We've seen this pattern at nearly every Snowflake account we've audited. Let's go through what actually moves the needle, with the SQL to implement each one.
1. Set Auto-Suspend to 60 Seconds (Not the Default 600)
Snowflake's default auto-suspend is 10 minutes (600 seconds). That means after your last query finishes, the warehouse sits idle - burning credits - for 10 full minutes. For interactive analytics workloads, 60 seconds is almost always the right number. The warehouse spins back up in under 5 seconds, so users barely notice.
ALTER WAREHOUSE analytics_wh
SET AUTO_SUSPEND = 60
AUTO_RESUME = TRUE;
The exception: ETL warehouses that run jobs every 2-3 minutes. Suspending and resuming between short-interval jobs wastes more on cold-start overhead than keeping the warehouse warm. For those, set auto-suspend to match your job interval.
2. Right-Size Your Warehouses
Most teams start with MEDIUM or LARGE warehouses because it feels safe. But here's the thing: each step up in warehouse size doubles your credit consumption per second. A LARGE warehouse costs 8 credits/hour; an X-SMALL costs 1 credit/hour. If your typical query runs in 5 seconds on LARGE (40 credit-seconds) and 20 seconds on X-SMALL (20 credit-seconds), the X-SMALL actually consumes half the credits for that query. The right approach is to profile your actual queries: check SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY, look at average execution time and bytes scanned, and size accordingly. Most analytical workloads do fine on SMALL.
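As a starting point, a query like this sketch surfaces average runtime and scan volume per warehouse (the 7-day window is arbitrary; adjust to your account):

```sql
-- Sketch: average runtime and bytes scanned per warehouse, last 7 days
SELECT
  warehouse_name,
  warehouse_size,
  COUNT(*) AS query_count,
  AVG(total_elapsed_time) / 1000 AS avg_seconds,
  AVG(bytes_scanned) / POWER(1024, 3) AS avg_gb_scanned
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
  AND warehouse_name IS NOT NULL
GROUP BY warehouse_name, warehouse_size
ORDER BY avg_seconds DESC;
```

If a warehouse's average query finishes in a second or two, it's a strong candidate for a size down.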
3. Use Resource Monitors with Hard Limits
Resource monitors are Snowflake's built-in budget controls. Most people set them up with notification-only thresholds, get the email, and ignore it. Set a hard limit too. When the monitor hits 100%, the warehouse suspends and can't resume until the next interval. Aggressive? Yes. But it's the only way to guarantee you won't blow past your budget.
CREATE RESOURCE MONITOR monthly_budget
WITH CREDIT_QUOTA = 5000
FREQUENCY = MONTHLY
START_TIMESTAMP = IMMEDIATELY
TRIGGERS
ON 75 PERCENT DO NOTIFY
ON 90 PERCENT DO NOTIFY
ON 100 PERCENT DO SUSPEND;
ALTER WAREHOUSE analytics_wh SET RESOURCE_MONITOR = monthly_budget;
4. Stop Using ACCOUNTADMIN for Everyday Queries
This sounds like a security tip, but it's a cost tip too. When users run queries as ACCOUNTADMIN, they often default to whatever warehouse was last used by an admin - typically a large, expensive one. Create role-specific warehouses. The analytics team gets analytics_wh (SMALL, auto-suspend 60s). The ETL service account gets etl_wh (MEDIUM, auto-suspend 300s). Nobody runs ad-hoc queries on the ETL warehouse because their role doesn't have access to it.
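A minimal sketch of that split (warehouse and role names here are illustrative):

```sql
-- Illustrative: one warehouse per team, granted only to that team's role
CREATE WAREHOUSE IF NOT EXISTS analytics_wh
  WAREHOUSE_SIZE = 'SMALL' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE;
CREATE WAREHOUSE IF NOT EXISTS etl_wh
  WAREHOUSE_SIZE = 'MEDIUM' AUTO_SUSPEND = 300 AUTO_RESUME = TRUE;

GRANT USAGE ON WAREHOUSE analytics_wh TO ROLE analyst_role;
GRANT USAGE ON WAREHOUSE etl_wh TO ROLE etl_role;
```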
5. Add Cluster Keys Only When They're Actually Needed
Cluster keys aren't free. Snowflake's automatic reclustering service runs in the background and consumes credits. For tables under 1TB, Snowflake's natural micro-partitioning is usually sufficient. Only add cluster keys to tables that are (a) larger than 1TB, (b) have a clear, consistent filter pattern in the WHERE clause (like a date column or a tenant_id), and (c) show poor partition pruning in the query profile. Check the query profile's "Partitions scanned vs. total" metric before adding a cluster key.
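When a table does meet all three criteria, the change itself is one statement (table and column names here are hypothetical):

```sql
-- Only after confirming poor pruning in the query profile
ALTER TABLE events CLUSTER BY (event_date, tenant_id);
```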
6. Use Transient Tables for Staging and Temp Data
Permanent tables in Snowflake come with Fail-safe: 7 days of storage that you can't access (only Snowflake support can) but you're still paying for. Transient tables skip Fail-safe entirely. For staging tables, temp tables, and anything you can easily recreate, use transient tables. The savings compound on high-churn data: a 500GB staging table that gets truncated and reloaded daily keeps 7 days of deleted data in Fail-safe, so you're paying for several times the live table's size in storage you'll never touch.
CREATE TRANSIENT TABLE staging.raw_events (
event_id VARCHAR,
event_ts TIMESTAMP_NTZ,
payload VARIANT
)
DATA_RETENTION_TIME_IN_DAYS = 1;
7. Set Data Retention to 1 Day on Staging Tables
Time Travel defaults to 1 day on Standard edition and up to 90 days on Enterprise. For staging tables that get truncated and reloaded, there's no reason to keep 90 days of Time Travel data. Set DATA_RETENTION_TIME_IN_DAYS = 1 on staging schemas. You're paying for every version of every row that gets stored during the retention window. On high-churn tables, this adds up fast.
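Applied at the schema level (schema name illustrative), so new staging tables inherit the setting:

```sql
-- New tables created in this schema default to 1 day of Time Travel
ALTER SCHEMA staging SET DATA_RETENTION_TIME_IN_DAYS = 1;
```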
8. Monitor with WAREHOUSE_METERING_HISTORY
You can't optimize what you don't measure. This view shows credit consumption by warehouse, by hour. Run this query weekly to spot warehouses that are burning credits during off-hours or consuming more than expected.
SELECT
warehouse_name,
DATE_TRUNC('day', start_time) AS usage_date,
SUM(credits_used) AS daily_credits
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY warehouse_name, usage_date
ORDER BY daily_credits DESC;
9. Use LIMIT During Development
This one's embarrassingly simple, but it matters. When you're developing and testing queries, add LIMIT 1000. Snowflake charges for the compute used to scan data, and for simple filter queries a LIMIT lets the engine stop scanning once it has enough rows. (Caveat: queries with ORDER BY or aggregations still scan everything before the limit applies, so LIMIT helps less there.) During development, you don't need all 500 million rows - you need to know if your SQL logic is correct. Train your team to use LIMIT during development and remove it for production runs.
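For example (table and column names hypothetical):

```sql
-- Development run: validate join/filter logic on a slice of the data
SELECT o.order_id, c.region
FROM orders o
JOIN customers c ON c.customer_id = o.customer_id
WHERE o.created_at >= '2025-01-01'
LIMIT 1000;
```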
10. Understand Snowflake's Three Cache Layers
Snowflake has three caches, and understanding them helps you avoid paying for redundant compute:
- Result cache (24 hours): If you run the exact same query and the underlying data hasn't changed, Snowflake returns the cached result instantly - no warehouse needed, no credits consumed. This is why dashboard refreshes are often free.
- Local disk cache (warehouse cache): Data read from remote storage gets cached on the warehouse's local SSD. Subsequent queries hitting the same data are faster. But here's the gotcha: if you suspend and resume the warehouse, this cache is wiped. That's the tradeoff with aggressive auto-suspend.
- Metadata cache: MIN, MAX, COUNT, and other aggregate operations on columns are served from metadata without scanning data. A SELECT COUNT(*) FROM big_table is essentially free.
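You can check how much the local disk cache is actually helping via the PERCENTAGE_SCANNED_FROM_CACHE column in QUERY_HISTORY (a sketch; the 7-day window is arbitrary):

```sql
-- Share of scanned data served from the local warehouse cache
SELECT
  warehouse_name,
  AVG(percentage_scanned_from_cache) AS avg_scanned_from_cache
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
  AND bytes_scanned > 0
GROUP BY warehouse_name
ORDER BY avg_scanned_from_cache;
```

A consistently low cache hit rate on an interactive warehouse can mean auto-suspend is set too aggressively for that workload.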
11. Pick the Right Snowflake Edition
Standard edition costs roughly a third less per credit than Enterprise ($2 vs. $3 at typical on-demand list rates). If you don't need multi-cluster warehouses, 90-day Time Travel, column-level security, or dynamic data masking, Standard is fine. We've seen companies on Enterprise edition that use zero Enterprise-only features. That's a 50% per-credit premium for nothing. Check what features you're actually using before your next contract renewal.
12. Check Auto-Clustering Before Manually Clustering
Snowflake automatically organizes data into micro-partitions as it's loaded. For most tables, this natural clustering is good enough. Before you add explicit cluster keys, check the clustering depth with SYSTEM$CLUSTERING_DEPTH. If the depth is already low (1-3), adding a cluster key won't help - and it'll cost you reclustering credits for no benefit.
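The check is a single call (table and candidate key column are placeholders):

```sql
-- Average clustering depth for a candidate key; lower is better
SELECT SYSTEM$CLUSTERING_DEPTH('sales.orders', '(order_date)');
```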
Bonus: Find Unused Tables with ACCESS_HISTORY
Tables that nobody queries are still costing you storage. The ACCESS_HISTORY view (Enterprise edition) shows which tables have been accessed and when. Cross-reference this with your table list to find candidates for archival or deletion.
SELECT t.table_schema, t.table_name, t.bytes
FROM SNOWFLAKE.ACCOUNT_USAGE.TABLES t
WHERE t.deleted IS NULL
AND t.table_catalog = 'YOUR_DATABASE'
AND NOT EXISTS (
SELECT 1
FROM SNOWFLAKE.ACCOUNT_USAGE.ACCESS_HISTORY ah,
LATERAL FLATTEN(ah.base_objects_accessed) f
WHERE f.value:objectName::STRING = t.table_catalog || '.' || t.table_schema || '.' || t.table_name
AND ah.query_start_time >= DATEADD('day', -90, CURRENT_TIMESTAMP())
)
ORDER BY t.bytes DESC;
Conclusion
None of these optimizations require a major architecture change. Most take less than an hour to implement. Start with auto-suspend (tip #1), resource monitors (tip #3), and WAREHOUSE_METERING_HISTORY (tip #8). Those three will give you visibility into where your credits are going and put guardrails in place to prevent runaway costs. Everything else is incremental improvement on top of a solid foundation.
