We have an Azure SQL Database that contains a large portion of our data.
I've been able to successfully set up a mirror of the database in a workspace.
I'm fairly new to Fabric and not sure how to proceed given that the data needs some transformation before it can be used in Power BI.
These are the two options I'm considering, but there may be another option that is better.
Option 1
Create views inside the mirror's SQL endpoint where we can filter the data and add some calculated columns, then use these views in a model. I'm not sure how this method performs.
Option 2
Move the data from the mirror to a Lakehouse or Warehouse and do the transformation along the way. I've experimented a little with this, but the SQL endpoint doesn't seem to show up as a source in a pipeline or Dataflow Gen2.
Any recommendation would be greatly appreciated.
Thank you.
@acenter41, Thanks for reaching out to Fabric community support.
For straightforward transformations, Dataflows Gen2 offers a user-friendly interface with scheduling capabilities.
For efficient processing of large datasets with frequent changes, incremental data loads with CDC are recommended.
For complex transformations requiring advanced scripting, Notebooks provide the necessary flexibility.
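The incremental/CDC idea above boils down to watermark logic: only extract rows modified since the last run. Here is a minimal sketch in plain Python (all column names are hypothetical; in Fabric the actual reads and writes would go through a notebook or pipeline):

```python
from datetime import datetime

def incremental_extract(rows, last_watermark):
    """Return only rows modified after the previous watermark,
    plus the new watermark to persist for the next run.
    Assumes each row carries a 'last_modified' timestamp column."""
    changed = [r for r in rows if r["last_modified"] > last_watermark]
    new_watermark = max((r["last_modified"] for r in changed),
                        default=last_watermark)
    return changed, new_watermark

# Example: only the second row is newer than the stored watermark.
rows = [
    {"id": 1, "last_modified": datetime(2025, 1, 1)},
    {"id": 2, "last_modified": datetime(2025, 3, 1)},
]
changed, wm = incremental_extract(rows, datetime(2025, 2, 1))
print(len(changed), wm)  # 1 2025-03-01 00:00:00
```

Persisting the returned watermark (e.g. in a small control table) between runs is what keeps each load incremental.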
Thanks,
Prashanth Are
MS Fabric community support
If this post helps, please consider accepting it as the solution so other members can find it more quickly, and give Kudos if it helped you resolve your query.
Since Data Pipelines/Dataflows don’t directly support Mirror as a source, use a Fabric Notebook to manually extract and load data.
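A notebook for this would follow a simple extract-transform-load shape. The sketch below shows that shape in plain Python; the Fabric-specific Spark calls appear only as comments, and every table and column name is hypothetical:

```python
# In a Fabric notebook the Spark calls would look roughly like:
#   src = spark.read.table("mirrored_db_orders")            # hypothetical shortcut name
#   out.write.mode("overwrite").saveAsTable("orders_curated")
# The transformation itself, sketched on plain row dicts:

def transform(rows):
    """Filter out inactive rows and add a calculated 'margin' column."""
    out = []
    for r in rows:
        if not r["is_active"]:
            continue
        out.append(dict(r, margin=r["revenue"] - r["cost"]))
    return out

result = transform([
    {"is_active": True, "revenue": 10.0, "cost": 6.0},
    {"is_active": False, "revenue": 5.0, "cost": 1.0},
])
print(result)
```

The same filter/derive logic translates directly to DataFrame operations (`filter`, `withColumn`) once the source is readable from the notebook.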
Hi @acenter41, as we haven't heard back from you, we wanted to kindly follow up and check whether the provided solution resolved your issue. Let us know if you need any further assistance.
Thanks,
Prashanth Are
MS Fabric community support
If this post helps, please consider accepting it as the solution so other members can find it more quickly, and give Kudos if it helped you resolve your query.
Hello @acenter41
I'd suggest a hybrid approach:
Create views in the mirrored SQL endpoint to handle lightweight transformations like filtering, basic calculations, or aggregations. This minimizes data movement and leverages the existing mirrored database.
For more complex transformations (e.g., combining datasets, advanced calculations), move data to a Lakehouse or Warehouse.
Connect Power BI directly to the Lakehouse/Warehouse for reports requiring extensive transformations.
For simpler reports, connect Power BI directly to SQL views using DirectQuery mode.
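The view pattern described above (filter plus calculated columns) can be illustrated with a local SQLite database, purely to show the shape of the SQL involved; in Fabric you would run the equivalent `CREATE VIEW` statement against the mirror's SQL analytics endpoint, and the table and column names below are made up for the example:

```python
import sqlite3

# Stand-in for a mirrored table; in Fabric this would already exist in the mirror.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (id INTEGER, region TEXT, revenue REAL, cost REAL)")
con.executemany(
    "INSERT INTO sales VALUES (?,?,?,?)",
    [(1, "EU", 100.0, 60.0), (2, "US", 50.0, 20.0), (3, "EU", 0.0, 5.0)],
)

# The lightweight-transformation view: filter rows, derive a margin column.
con.execute("""
    CREATE VIEW v_sales_eu AS
    SELECT id, region, revenue, cost, revenue - cost AS margin
    FROM sales
    WHERE region = 'EU' AND revenue > 0
""")

rows = con.execute("SELECT id, margin FROM v_sales_eu").fetchall()
print(rows)  # [(1, 40.0)]
```

Because a view is evaluated at query time, the mirrored data stays in place and no copy pipeline is needed for these simpler transformations.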
Where you mention moving data to a Lakehouse or Warehouse for complex transformations, how would you move the data from the mirror to the Lakehouse/Warehouse? The mirror does not seem to appear as a source when trying to move data from it using a pipeline or dataflow.
Agreed with @v-prasare, Notebooks would be a viable option for moving data (and performing complex transformation, depending on what you're looking to do).
Should you decide to go with notebooks, be sure to get familiar with the NotebookUtils library, which may help with accessing your lakehouses: https://learn.microsoft.com/en-us/fabric/data-engineering/notebook-utilities
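As a small sketch of what NotebookUtils access looks like: `notebookutils` is only importable inside the Fabric notebook runtime, and the `Files/raw` folder below is a hypothetical lakehouse path, so this snippet falls back gracefully when run elsewhere:

```python
# NotebookUtils file access (Fabric-only API). Outside a Fabric notebook
# the import fails, so we fall back to an empty listing here.
try:
    import notebookutils                      # available only in the Fabric runtime
    files = notebookutils.fs.ls("Files/raw")  # list files in the attached lakehouse
except ModuleNotFoundError:
    files = []                                # running outside Fabric
print(len(files))
```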