EduardD
Helper IV

Power BI deployment pipeline for Data Pipeline and Lakehouse deployment

Hi everyone, 👋 I’m working with three Power BI workspaces: DEV, UAT, and PROD, each hosting reports and datasets. To handle large volumes of data more efficiently, I’ve started using Direct Lake with a Lakehouse.

My plan is to build a data pipeline that ingests data from Azure SQL Database into a Lakehouse. Each environment (DEV, UAT, PROD) will have its own Azure SQL DB and Data Lake.

Challenge: When deploying the pipeline from DEV to UAT to PROD, I’m unable to dynamically substitute the connection strings or data source parameters. The pipeline doesn’t seem to support environment-specific configurations out of the box.

Question: What’s the recommended approach to ensure that each pipeline deployment uses its own set of parameters (e.g., Azure SQL DB connection string, Workspace, Lakehouse path, etc.)? Is there a best practice for managing environment-specific settings during deployment?

Thanks in advance for your insights!

1 ACCEPTED SOLUTION
AntoineW
Impactful Individual

Hello @EduardD,

 

Good question! Here is the recommended approach (it works with Fabric Data Pipelines + Variable Libraries):

  1. Create a Variable Library with per-environment values
    Examples: SqlServer, SqlDb, LakehouseId (or Lakehouse path), optional Container/Folder, etc. Define value sets for DEV / UAT / PROD.

  2. Reference the library in your pipeline
    In the pipeline’s Library variables pane, add the variables you need. They become available to use in activities as dynamic content. 

  3. Bind the variables in your activities / connections
    In your Copy (or other) activities, use Add dynamic content to point the Azure SQL source (server/db) and the Lakehouse sink (workspace/lakehouse path or ID) to the variables you added from the library. The official tutorial shows exactly this pattern: using a variable library to set the source and destination of a Copy activity.

    Tip: you can also combine pipeline parameters with library variables for flexible expressions (see the sketch after this list).

  4. Switch values per stage with Deployment Pipelines
    In your Deployment pipeline, select the appropriate active value set (DEV/UAT/PROD) for each stage. Your pipeline will then read the right connection values without hard-coding.
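
To make steps 1-3 concrete, here is a minimal, illustrative sketch. The SqlServer / SqlDb / LakehouseId names are the ones suggested in step 1; LakehouseFolder, the TableName parameter, and the sample values are hypothetical; and the pipeline().libraryVariables.<name> reference is the form the Library variables pane exposes in dynamic content (verify it in the expression builder):

    Library variables (define one value set per stage: DEV / UAT / PROD):
      SqlServer   = dev-sql.database.windows.net   (UAT/PROD value sets hold their own servers)
      SqlDb       = SalesDb
      LakehouseId = <lakehouse GUID for this stage>

    Dynamic content bound in the Copy activity (Azure SQL source and Lakehouse sink):
      @pipeline().libraryVariables.SqlServer
      @pipeline().libraryVariables.SqlDb
      @pipeline().libraryVariables.LakehouseId

    Combining a pipeline parameter with a library variable (the tip in step 3), e.g. a per-table folder:
      @concat(pipeline().libraryVariables.LakehouseFolder, '/', pipeline().parameters.TableName)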


Why this solves your problem

  • Variable Libraries provide environment-specific configuration centrally; Data Pipelines consume those values at run time.

  • When you deploy DEV → UAT → PROD, you don’t edit the pipeline; you just select the stage’s value set and it plugs in the right server/db/lakehouse.

 

Source: https://learn.microsoft.com/en-us/fabric/data-factory/variable-library-integration-with-data-pipelin...

 

Looking forward to your good news!

Best regards,

Antoine


2 REPLIES

EduardD
Helper IV

@AntoineW that is great, thank you so much!
Can you please elaborate on this tip: you can also combine pipeline parameters with library variables for flexible expressions?
Side question: should I assign each library variable to an internal pipeline variable and use those local variables in my activities?
My concern is that, as a preview feature, library variables could stop working at any moment, so if they do I would need a plan B. Plan B would be a JSON config file in each lakehouse holding all the config parameters, so I could repoint those pipeline variables to read from the config file if library variables break (roughly as sketched below).
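
For illustration, a minimal sketch of that fallback, assuming a hypothetical Lookup activity named LookupConfig (set to "first row only") that reads Files/config.json from the stage's lakehouse; field names and values are placeholders:

    config.json kept in each lakehouse's Files area:
      { "SqlServer": "uat-sql.database.windows.net", "SqlDb": "SalesDb", "LakehouseFolder": "Files/landing" }

    Dynamic content that repoints the pipeline variables (or the Copy activity directly) to the Lookup output:
      @activity('LookupConfig').output.firstRow.SqlServer
      @activity('LookupConfig').output.firstRow.SqlDb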

 

 
