SavioFerraz
Kudo Kingpin

DW Mirroring Refresh Latency Increasing Over Time — Is This Expected in Production Workloads?

We mirrored a SQL Database into a Fabric Warehouse for reporting. For the first few weeks, latency stayed around 2–3 seconds, but now we’re regularly seeing 20–45+ seconds during business hours.

No structural changes were made on the source system.

Is this normal as throughput increases, or should we review our setup?
Anyone found a way to stabilize mirror latency for high-transaction workloads?

2 ACCEPTED SOLUTIONS
Mauro89
Super User

Hi @SavioFerraz,

 

Yes, this behavior can occur in production workloads. DW Mirroring latency is not fixed; it varies with ingestion pressure and resource availability. As transaction volume grows during peak hours, the mirroring pipeline must work through a larger backlog of CDC log records, which leads to the kind of 20–45 second delays you’re seeing.

 

A few factors commonly drive rising latency:

 

  • Higher transaction throughput on the source database (more commit events to sync); a quick way to gauge this is sketched after this list
  • Batch size growth in the underlying capture/replication process
  • Fabric capacity pressure, especially if Warehouse resources are shared
  • A historical backlog of CDC changes that was not fully caught up during off-hours
  • Heavy operations such as frequent index rebuilds, blocking, or large bulk loads
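
To put a number on the first factor, you can sample the source’s cumulative transaction counter twice and take the difference. This is a minimal sketch, not a definitive method, assuming the source is SQL Server or Azure SQL Database (where sys.dm_os_performance_counters is available); run it in the mirrored database, and note the instance_name filter may need adjusting on Azure SQL Database:

    -- Estimate commit throughput: sample the cumulative Transactions/sec
    -- counter twice, 60 seconds apart, and average the delta.
    DECLARE @t1 BIGINT, @t2 BIGINT;

    SELECT @t1 = cntr_value
    FROM sys.dm_os_performance_counters
    WHERE object_name LIKE '%:Databases%'
      AND counter_name = 'Transactions/sec'
      AND instance_name = DB_NAME();   -- may differ on Azure SQL Database

    WAITFOR DELAY '00:01:00';

    SELECT @t2 = cntr_value
    FROM sys.dm_os_performance_counters
    WHERE object_name LIKE '%:Databases%'
      AND counter_name = 'Transactions/sec'
      AND instance_name = DB_NAME();

    SELECT (@t2 - @t1) / 60.0 AS avg_transactions_per_sec;

If this number is markedly higher during business hours than in the first weeks, rising mirror latency is the expected symptom rather than a regression.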

 

Recommended monitoring

Enable and track:

  • Warehouse Utilization metrics
  • Mirror refresh latency trend
  • Source DB log flush waits

 

This will help correlate latency spikes with capacity pressure or workload bursts.
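
For the third item, wait statistics on the source are a good starting point. A minimal sketch, assuming a SQL Server or Azure SQL Database source; sys.dm_os_wait_stats is cumulative since the last restart, so sample it periodically and trend the deltas:

    -- Log-write pressure: high or growing WRITELOG waits mean commits are
    -- stalling on log flushes, which also delays the mirror's change feed.
    SELECT wait_type,
           waiting_tasks_count,
           wait_time_ms,
           wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type IN ('WRITELOG', 'LOGBUFFER')
    ORDER BY wait_time_ms DESC;

If avg_wait_ms climbs during business hours in step with mirror latency, the bottleneck is on the source side rather than in Fabric capacity.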

 

Best regards!

 

If this post helps, consider leaving kudos or marking it as a solution.


Nabha-Ahmed
Super User

Hi @SavioFerraz,

It’s not expected for mirror latency to jump from 2–3 seconds to 20–45 seconds. This usually means the source Change Tracking or the Fabric mirror compute is falling behind.

 

A few things to check that typically stabilize high-transaction workloads:

  • Verify the source SQL isn’t hitting CT cleanup pressure, log write pressure, or blocking; a starter query is sketched below.
  • Check Fabric capacity load during business hours; mirrors slow down when capacity is saturated.
  • Consider increasing the mirror compute tier if delta volume has grown.

When CT is healthy and capacity isn’t overloaded, mirror latency stays in single-digit seconds. If all looks normal, opening a support ticket is recommended.
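
For the first check, this is a minimal sketch, assuming the source is SQL Server or Azure SQL Database with Change Tracking enabled; dbo.YourLargeTable is a placeholder, so substitute one of your mirrored tables:

    -- 1. CT configuration: retention period and whether auto-cleanup is on.
    SELECT DB_NAME(database_id) AS database_name,
           retention_period,
           retention_period_units_desc,
           is_auto_cleanup_on
    FROM sys.change_tracking_databases;

    -- 2. Gap between the current CT version and the oldest version still
    --    valid for a tracked table; if cleanup overtakes the mirror's
    --    last-synced version, the mirror must reseed.
    SELECT CHANGE_TRACKING_CURRENT_VERSION() AS current_version,
           CHANGE_TRACKING_MIN_VALID_VERSION(OBJECT_ID('dbo.YourLargeTable')) AS min_valid_version;

    -- 3. Blocking happening right now on the source.
    SELECT session_id, blocking_session_id, wait_type, wait_time, command
    FROM sys.dm_exec_requests
    WHERE blocking_session_id <> 0;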

If this helps you, please mark it as a solution to help others, and consider leaving a kudo.

Best regards 

Nabha Ahmed 

 


3 REPLIES
v-prasare
Community Support

Hi @SavioFerraz,

We would like to confirm whether our community members’ answers resolved your query, or whether you need further help. If you still have questions or need more support, please feel free to let us know; we are happy to help.

 

 

Thank you for your patience; we look forward to hearing from you.
Best Regards,
Prashanth Are
MS Fabric community support

