SavioFerraz
Kudo Kingpin

DW Mirroring Refresh Latency Increasing Over Time — Is This Expected in Production Workloads?

We mirrored a SQL Database into Fabric Warehouse for reporting. For the first few weeks, latency stayed around 2–3 seconds, but now we’re regularly seeing 20–45+ seconds during business hours.

No structural changes were made on the source system.

Is this normal as throughput increases, or should we review our setup?
Anyone found a way to stabilize mirror latency for high-transaction workloads?

2 ACCEPTED SOLUTIONS
Mauro89
Solution Sage

Hi @SavioFerraz,

 

Yes — this behavior can occur in production workloads. DW Mirroring latency is not fixed and will increase based on ingestion pressure and resource availability. As transaction volume grows during peak hours, the mirroring pipeline must process a larger backlog of CDC logs, which leads to the kind of 20–45 second delays you’re seeing.

 

A few factors commonly drive rising latency:

 

  • Higher transaction throughput on the source database (more commit events to sync)
  • Batch size growth in the underlying capture/replication process
  • Fabric capacity pressure, especially if Warehouse resources are shared
  • A historical backlog of CDC changes that was not fully caught up during off-hours
  • Complex workloads such as frequent index rebuilds, blocking, or large bulk loads
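To see why the backlog factor alone can produce exactly this pattern, here is a minimal sketch (illustrative numbers only, not a Fabric API) that models the CDC pipeline as a simple queue: each tick, new change events arrive on the source, and the mirror drains them at a fixed rate. As soon as peak-hour arrivals exceed drain capacity, pending events accumulate and latency climbs steadily:

```python
# Toy model of CDC backlog growth: arrivals vs. a fixed drain rate.
def backlog_after(arrival_rates, drain_per_tick):
    """Return the pending-event backlog after each tick."""
    backlog = 0
    history = []
    for arrivals in arrival_rates:
        backlog = max(0, backlog + arrivals - drain_per_tick)
        history.append(backlog)
    return history

# Off-hours: 100 events/tick; business hours: 300 events/tick.
# The mirror drains 200 events/tick, so the backlog only grows at peak.
rates = [100] * 5 + [300] * 5
print(backlog_after(rates, drain_per_tick=200))
# → [0, 0, 0, 0, 0, 100, 200, 300, 400, 500]
```

The takeaway: latency that is stable off-hours but grows through the business day is the signature of arrivals outpacing sync throughput, not necessarily a misconfiguration.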

 

Recommended monitoring

 

 

Enable and track:

 

  • Warehouse Utilization metrics
  • Mirror refresh latency trend
  • Source DB log flush waits

 

This will help correlate latency spikes with capacity pressure or workload bursts.

 

Best regards!

 

If this post helps, consider leaving kudos or marking it as a solution.

Nabha-Ahmed
Memorable Member

Hi @SavioFerraz 

It’s not expected for mirror latency to jump from 2–3 seconds to 20–45 seconds. This usually means the source Change Tracking or the Fabric mirror compute is falling behind.

 

A few things to check that typically stabilize high-transaction workloads:

  • Verify the source SQL database isn't hitting Change Tracking (CT) cleanup pressure, log write pressure, or blocking.
  • Check Fabric capacity load during business hours; mirrors slow down when capacity is saturated.
  • Consider increasing the mirror compute tier if the delta volume has grown.

When CT is healthy and capacity isn’t overloaded, mirror latency stays in single-digit seconds. If all looks normal, opening a support ticket is recommended.
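Before opening that ticket, it helps to quantify "falling behind" from the latency samples you already collect. A minimal sketch (the threshold values are assumptions, tune them for your workload): flag when recent latency has drifted well past its baseline, which is exactly the 2–3 s → 20–45 s symptom described above:

```python
# Flag sustained latency drift versus a healthy baseline.
def falling_behind(samples_s, baseline_s=3.0, factor=5.0):
    """True if the average of the last 5 samples exceeds factor x baseline."""
    recent = samples_s[-5:]
    return sum(recent) / len(recent) > baseline_s * factor

print(falling_behind([2, 3, 2, 3, 25, 38, 41, 30, 44]))  # prints True
```

Averaging a recent window rather than alerting on single samples avoids paging on one-off spikes from a bulk load or index rebuild.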

If this helps, mark it as a solution to help others, and consider leaving kudos.

Best regards 

Nabha Ahmed 

 


3 REPLIES
v-prasare
Community Support

Hi @SavioFerraz,

We would like to confirm whether our community members' answers resolved your query, or whether you need further help. If you still have any questions or need more support, please feel free to let us know. We are happy to help.

 

 

Thank you for your patience; we look forward to hearing from you.
Best Regards,
Prashanth Are
MS Fabric community support

