TesnimeBS_
Frequent Visitor

Automating dynamic data source changes in Microsoft Fabric with Dataflows Gen2 and pipelines

Hello,

I'm working on a project in Microsoft Fabric where I need to consolidate data from multiple Oracle servers to create custom Power BI reports for my clients. Each client must be able to access their own data without it being mixed with anyone else's, and I want to automate the data source switching without manual intervention.

I'm currently using Dataflows Gen2 and pipelines, but I'm looking for a way to dynamically parameterize the connections to the client servers automatically, while avoiding having to recreate a Dataflow Gen2 and a voluminous semantic model for each client. My aim is to ensure that each client's data is refreshed without manual intervention and that the Power BI reports are always up to date.

Note that the customer databases are identical in structure: the table names and relationships are the same; only the data differs.

What are the best practices or tools in Fabric for implementing this dynamic switching of data sources? Any tips for avoiding duplicate lakehouses and simplifying this management?

Thanks a lot!

FabianSchut
Super User

Hi @TesnimeBS_,

 

Have you considered creating a single semantic model for your Power BI report and applying Row-Level Security (RLS) to it: https://learn.microsoft.com/en-us/fabric/security/service-admin-row-level-security? You could then write all your data to the same model and avoid a voluminous semantic model per client.
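As a minimal sketch of that approach (the table and column names here are illustrative, not from this thread), an RLS role on the shared semantic model could filter a client dimension by the signed-in user with a DAX table filter like:

    // DAX row filter for the RLS role, applied to a hypothetical Clients table
    [ClientEmail] = USERPRINCIPALNAME()

Each client account then sees only its own rows, even though all the data lives in one model.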

 

You still need to create separate connections to the servers, but you could use a loop in the data pipeline to select each of the servers automatically. That way, your data pipeline remains manageable and clear.
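For instance, assuming a Lookup activity (named 'Lookup servers' here as a placeholder) that returns one row per server, the ForEach activity's Items field could be set with a Dynamic Content expression such as:

    // Feed the ForEach with the Lookup's output rows
    @activity('Lookup servers').output.value

Inside the loop, @item() then exposes the current server's row to the downstream activities.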

 

Kind regards,

Fabian

TesnimeBS_
Frequent Visitor

Hi @FabianSchut,

 

Thank you for your response!

 

Could you please guide me on the following points:

How to configure a Lookup activity to dynamically retrieve server information?
How can I create a loop in a data pipeline to automatically switch between servers?
How to pass this dynamic information (server name, database, etc.) into a data copy or extraction step?
If you have any concrete examples or links to tutorials, that would be greatly appreciated!

 

Thank you in advance for your help!

 

Kind regards, 

Tesnime

FabianSchut
Super User

Hi @TesnimeBS_,

 

You could, for example, create a CSV file within your Lakehouse that holds all the Fabric connection IDs you created for your different servers. With a Lookup activity in the data pipeline, you can read the rows from the CSV and use them in a ForEach activity to get the Fabric connection ID per row. Then you can use the Dynamic Content option within the copy activity to use that ID as the connection. An example is given here:
https://community.fabric.microsoft.com/t5/Data-Pipelines/Dynamically-set-Copy-data-Destination-Conne...
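To make that pattern concrete (all names and IDs below are hypothetical placeholders), the CSV in the Lakehouse could look like:

    ClientName,ConnectionId
    ClientA,11111111-1111-1111-1111-111111111111
    ClientB,22222222-2222-2222-2222-222222222222

A Lookup activity reads this file, its rows feed the ForEach (Items = @activity('Lookup connections').output.value), and inside the loop the copy activity's connection is set through Dynamic Content with an expression along the lines of:

    @item().ConnectionId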

