Streams & Tasks
Jan 21, 2026

Snowflake Tasks: Schedule Wisely to Avoid Constant Compute

Tasks that run too frequently waste credits on unnecessary executions. Optimize your task schedules based on actual data arrival patterns.

Raj
CEO, MaxMyCloud


The Problem

Many teams schedule tasks to run every 5 minutes "just to be sure data is fresh." If data only arrives once per hour, 11 of every 12 executions do no useful work: 264 wasted runs per day.

Task Scheduling Strategies

Time-Based Scheduling

-- Every 5 minutes (288 executions/day)
CREATE TASK frequent_task
WAREHOUSE = etl_wh
SCHEDULE = 'USING CRON */5 * * * * UTC'
AS
INSERT INTO target SELECT * FROM source;

-- Daily at 2 AM (1 execution/day)
CREATE TASK daily_task
WAREHOUSE = etl_wh
SCHEDULE = 'USING CRON 0 2 * * * UTC'
AS
INSERT INTO target SELECT * FROM source;
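Note that newly created tasks start in a suspended state; neither of the tasks above will run on its schedule until it is explicitly resumed:

```sql
-- Tasks are created SUSPENDED; resume to start the schedule
ALTER TASK daily_task RESUME;

-- Verify the task's state and schedule
SHOW TASKS LIKE 'daily_task';
```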

Cost Impact

Assuming roughly 0.1 credits per execution:

  • 5-minute task: 288 × 0.1 credits = 28.8 credits/day
  • Hourly task: 24 × 0.1 credits = 2.4 credits/day
  • Daily task: 1 × 0.1 credits = 0.1 credits/day
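To base schedules on actual behavior rather than guesswork, count how often each task really runs. A sketch using the SNOWFLAKE.ACCOUNT_USAGE.TASK_HISTORY view (which can lag by up to ~45 minutes):

```sql
-- Executions per task per day over the last week
SELECT
    name AS task_name,
    DATE(scheduled_time) AS run_date,
    COUNT(*) AS executions
FROM snowflake.account_usage.task_history
WHERE scheduled_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
GROUP BY task_name, run_date
ORDER BY task_name, run_date;
```

Tasks whose execution count far exceeds the number of times new data actually arrives are candidates for a slower schedule or stream-based triggering.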

Stream-Based Scheduling (Event-Driven)

CREATE STREAM source_stream ON TABLE source;

CREATE TASK stream_based_task
WAREHOUSE = etl_wh
SCHEDULE = '5 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('source_stream')
AS
-- SELECT * on a stream also returns the change-tracking columns
-- METADATA$ACTION, METADATA$ISUPDATE, and METADATA$ROW_ID,
-- so exclude them (or list target columns explicitly)
INSERT INTO target
SELECT * EXCLUDE (METADATA$ACTION, METADATA$ISUPDATE, METADATA$ROW_ID)
FROM source_stream;

Task only processes when stream has data. Checking is nearly free; processing only happens when needed.
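When the WHEN condition evaluates to false, the run is recorded with state SKIPPED in task history and consumes no warehouse compute. A sketch for checking how often a task actually does work:

```sql
-- Skipped vs. executed runs for the stream-based task
SELECT
    state,              -- e.g. SUCCEEDED, FAILED, SKIPPED
    COUNT(*) AS runs
FROM snowflake.account_usage.task_history
WHERE name = 'STREAM_BASED_TASK'
  AND scheduled_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
GROUP BY state;
```

A high ratio of SKIPPED to SUCCEEDED runs confirms the WHEN clause is saving you compute.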

Real-World Example

A data team had 15 tasks running every 5 minutes: 15 tasks × 288 executions/day = 4,320 runs/day. At roughly 0.05 credits per run and $3 per credit, that is 216 credits/day = $648/day = $19,440/month.

After optimization (stream-based execution, a 15-minute schedule, and batched tasks): 15 tasks × 96 scheduled executions/day = 1,440 runs/day, of which about 20% actually process data = 288 runs. Daily cost: 14.4 credits = $43.20/day = $1,296/month. Savings: $18,144/month (a 93% reduction).

Best Practices

  1. Schedule tasks based on actual data arrival patterns
  2. Use SYSTEM$STREAM_HAS_DATA() for event-driven execution
  3. Use smaller warehouses (X-Small, Small) for tasks
  4. Batch operations instead of processing one row at a time
  5. Monitor task execution frequency vs actual work performed
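Practice 4 (batching) pairs naturally with streams: let changes accumulate between runs, then apply them in one set-based statement instead of row-at-a-time inserts. A hypothetical sketch, assuming source and target share an id key and a payload column:

```sql
-- Batched, event-driven upsert from the stream into the target
CREATE TASK batched_merge_task
WAREHOUSE = etl_wh
SCHEDULE = '15 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('source_stream')
AS
MERGE INTO target t
USING source_stream s
    ON t.id = s.id
WHEN MATCHED AND s.METADATA$ACTION = 'INSERT' THEN
    UPDATE SET t.payload = s.payload
WHEN NOT MATCHED AND s.METADATA$ACTION = 'INSERT' THEN
    INSERT (id, payload) VALUES (s.id, s.payload);
```

One MERGE over the accumulated stream contents replaces dozens of small inserts, and the DML consumes the stream so the next run starts from a clean offset.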

Key Takeaways

  • Schedule based on data arrival, not "just in case"
  • Use streams for event-driven execution
  • Use smaller warehouses for tasks
  • Batch operations efficiently
  • 80-90% cost reduction possible with proper optimization


Start Optimizing Your Snowflake Costs Today

Uncover hidden inefficiencies and start reducing Snowflake spend in minutes. No disruption, no risk.