dbWizard
Advocate I

Redundant Mirrored Database Data for Downstream Dependency Protection

Problem: Downstream dependencies such as reports (reports are not pointed directly at bronze) and processes break when a Mirrored Database catastrophically fails (capacity maxed out, owner no longer at the company, etc.) and requires a full rebuild. This seldom happens, but coming from an HA/DR background, it is imperative to protect production availability.

 

My solution: Copy data from the mirrored database in Fabric to a lakehouse in Fabric, 1:1. All downstream processes and dependencies (silver transforms, etc.) point at the bronze lakehouse instead of referencing the mirrored database directly. Yes, I realize I am making a copy of a copy, but we have been burned by mirroring failing. If mirroring could persist the data after a rebuild, that would be fine... but it doesn't.
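To make the pattern concrete, here is a minimal sketch of the 1:1 copy loop. In a real Fabric notebook the read/write would be Spark calls (e.g. `spark.read.table(...)` and `df.write.mode("overwrite").saveAsTable(...)`); here the reader and writer are hypothetical injected callables so the loop itself is self-contained.

```python
# Sketch of the mirrored-DB -> bronze-lakehouse copy pattern.
# read_table / write_table are stand-ins for the Spark read/write
# you would use inside a Fabric notebook; they are injected here
# so the copy loop is testable without a Spark session.

def copy_tables(table_names, read_table, write_table):
    """Copy each source table 1:1 into the bronze lakehouse."""
    copied = []
    for name in table_names:
        rows = read_table(name)    # full read from the mirrored database
        write_table(name, rows)    # overwrite the bronze copy
        copied.append(name)
    return copied
```

Scheduled per run, this keeps bronze as a durable, independently owned copy: if the mirrored database has to be rebuilt, downstream silver transforms keep reading the last good bronze snapshot.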

 

Thoughts? Support? Poke holes in my idea?

 

Thanks!

1 ACCEPTED SOLUTION

Hey, thanks for reaching out... No, I don't think I've really got an answer. I think it's just a current limitation of Fabric. The real solution would be to allow ownership of a mirrored database to be transferred, or to keep the item ID from being destroyed if you have to rebuild a mirrored database. I'll mark this as solved.


7 REPLIES
v-csrikanth
Community Support

Hi @dbWizard 

It's been a while since I heard back from you and I wanted to follow up. Have you had a chance to try the solutions that have been offered?
If the issue has been resolved, can you mark the post as resolved? If you're still experiencing challenges, please feel free to let us know and we'll be happy to continue to help!
Looking forward to your reply!

Best Regards,
Community Support Team _ C Srikanth.


v-csrikanth
Community Support

Hi @dbWizard 
Thanks for bringing this to the Community!

Your bronze-lakehouse copy pattern is a solid way to insulate downstream analytics from a catastrophic mirrored-DB failure—but here are a few considerations and alternative ideas to weigh:

  • Making a 1:1 copy does safeguard reports and transforms, but it does incur extra storage and lineage overhead.

  • Incremental sync instead of full copy: Use Fabric Data Pipelines or Dataflow Gen2 with watermark parameters (RangeStart/RangeEnd) to only load changed or new rows, reducing cost and latency.

  • Snapshot isolation on the mirrored DB: If your mirrored database supports database snapshots or copy-only backups, you can snapshot to a durable store (OneLake, Blob) on a schedule, then point Bronze at those snapshots rather than re-copying live data.

  • Auto-failover groups or geo-replication: For critical mirrored DBs, consider Azure SQL’s auto-failover group feature to provide transparent DR and eliminate manual rebuilds.

  • OneLake shortcuts (if available): Instead of physically copying, use OneLake’s “shortcut” feature so Bronze points to a durable copy location outside the live mirror, yet doesn’t duplicate storage.

  • Monitoring & alerting: Whatever pattern you choose, build health checks (pipeline success, data freshness) to automatically detect and remediate sync failures before they affect consumers.

Each of these can reduce cost, simplify lineage, or provide more seamless DR—while still keeping Bronze as your single source for all downstream dependencies.
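The incremental-sync point above can be sketched in plain Python. This is not pipeline code; it only illustrates the watermark idea behind the `RangeStart`/`RangeEnd` parameters mentioned above, assuming each row carries a `modified` timestamp column.

```python
from datetime import datetime

def incremental_load(source_rows, watermark):
    """Return only the rows modified after the last watermark,
    plus the new watermark to persist for the next run.

    source_rows: iterable of dicts with a 'modified' datetime column.
    watermark:   datetime of the last successful load (the RangeStart).
    """
    changed = [r for r in source_rows if r["modified"] > watermark]
    # Advance the watermark only as far as the data we actually loaded.
    new_watermark = max((r["modified"] for r in changed), default=watermark)
    return changed, new_watermark
```

Persisting `new_watermark` between runs (e.g. in a small control table) is what lets each refresh move only the delta instead of re-copying the full mirror.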


If this helps, please give us Kudos and mark the response as the accepted solution.
Best Regards,
Community Support Team _ C Srikanth.

shivani111
Frequent Visitor

Could you please explain how you are doing 'Copy data from mirrored database in fabric to a lakehouse in fabric'?

Notebook.

nilendraFabric
Community Champion

Hi @dbWizard 

 

Your approach is technically sound when there is low tolerance for analytics downtime.

 

But please keep these points in mind: each additional data copy increases storage costs and introduces potential synchronisation lag between the operational source and analytics consumers. The redundancy also creates additional complexity in data lineage tracking and governance, as organisations must now manage two separate data stores that theoretically contain identical information.

 

 

Mirroring is near real time, so try implementing incremental data loading patterns that detect changes in the mirrored database and apply them to the lakehouse in near real time.
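One way to detect changes without a reliable modified-timestamp column is to fingerprint rows and compare the mirror to bronze. This is a minimal sketch of that idea in plain Python; the `id` key column and JSON-serialisable row values are assumptions for illustration.

```python
import hashlib
import json

def row_fingerprint(row):
    """Stable hash of a row's contents (assumes JSON-serialisable values)."""
    payload = json.dumps(row, sort_keys=True, default=str)
    return hashlib.sha256(payload.encode()).hexdigest()

def detect_changes(mirror_rows, bronze_rows, key="id"):
    """Compare the mirrored source to the bronze copy and return the
    rows that must be inserted or updated in bronze (new or altered)."""
    bronze_index = {r[key]: row_fingerprint(r) for r in bronze_rows}
    return [
        r for r in mirror_rows
        if bronze_index.get(r[key]) != row_fingerprint(r)
    ]
```

Applying only the returned delta keeps the bronze lakehouse in near-real-time sync while avoiding a full re-copy on every run.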

Thank you for your input. The delays are definitely a consideration, but I think downstream processes breaking outweighs any potential slowness in data moving around. I realize it's best to have one copy of the data... however, I need to be able to count on the data being there and the process running smoothly. Reseeding a DB is a huge hit on IO at the source as well. Hopefully more improvements come.
