Hello,
I'm working on a project in Microsoft Fabric where I need to consolidate data from multiple Oracle servers to build custom Power BI reports for my clients. Each customer must only be able to access their own data, without it being mixed with other customers' data, and I want to switch between data sources automatically, without manual intervention.
I'm currently using Dataflow Gen2 and pipelines, but I'm looking for a way to dynamically parameterize the connections to the client servers, while avoiding having to recreate the Dataflow Gen2 and a large semantic model for each client. My aim is to ensure that each client's data is refreshed without manual intervention and that the Power BI reports are always up to date.
Note that the customer databases are identical in structure: table names and relationships are the same; only the data changes.
What are the best practices or tools in Fabric for automating this switching of data sources? Any tips for avoiding duplicate lakehouses and simplifying this management?
Thanks a lot!
Hi @TesnimeBS_,
Have you considered creating a single semantic model for your Power BI report and applying Row-Level Security (RLS) to it: https://learn.microsoft.com/en-us/fabric/security/service-admin-row-level-security? You could then write all your clients' data to the same model and avoid maintaining a large semantic model per client.
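As a minimal sketch, assuming the model has a Clients table with a UserEmail column that maps each report user to their client (the table and column names here are only assumptions for illustration), the DAX filter for an RLS role could look like this:

    -- DAX filter expression defined on the Clients table for an RLS role
    -- (Clients and UserEmail are hypothetical names)
    Clients[UserEmail] = USERPRINCIPALNAME()

Provided the relationships filter from Clients towards the fact tables, the related tables are then filtered automatically, so each user only sees their own client's rows.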
You still need to create separate connections to the servers, but you could use a loop in the data pipeline to select each of the servers automatically. That way, your data pipeline remains manageable and clear.
Kind regards,
Fabian
Hi @FabianSchut,
Thank you for your response!
Could you please guide me on the following points:
How do I configure a Lookup activity to dynamically retrieve the server information?
How do I create a loop in a data pipeline to automatically switch between servers?
How do I pass this dynamic information (server name, database, etc.) into a data copy or extraction step?
If you have any concrete examples or links to tutorials, that would be greatly appreciated!
Thank you in advance for your help!
Kind regards,
Tesnime
Hi @TesnimeBS_,
You could, for example, create a CSV file in your Lakehouse that holds the Fabric connection IDs you created for your different servers. With a Lookup activity in the data pipeline you can read the rows from that CSV and use them in a ForEach activity to get the Fabric connection ID for each row. Inside the loop, you can then use the Dynamic Content option of the copy activity to use that ID as the connection. An example is given here:
https://community.fabric.microsoft.com/t5/Data-Pipelines/Dynamically-set-Copy-data-Destination-Conne...
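As a rough sketch of how the pieces could fit together (the file name, column names, and activity names below are only assumptions for illustration, not an exact recipe):

    connections.csv stored in the Lakehouse Files area:
    serverName,databaseName,connectionId
    oracle-client-a.example.com,SALES_A,<Fabric connection ID for client A>
    oracle-client-b.example.com,SALES_B,<Fabric connection ID for client B>

    Pipeline setup:
    1. Lookup activity "LookupConnections" reads connections.csv, with "First row only" disabled so it returns all rows.
    2. ForEach activity with Items set to: @activity('LookupConnections').output.value
    3. Copy activity inside the ForEach, where Dynamic Content supplies the per-client values:
       connection -> @item().connectionId
       database   -> @item().databaseName
       server     -> @item().serverName

The connectionId column would hold the IDs of the Fabric connections you already created for each Oracle server, so the same copy activity is reused for every client without duplicating the pipeline.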