Hi everyone, 👋 I’m working with three Power BI workspaces: DEV, UAT, and PROD, each hosting reports and datasets. To handle large volumes of data more efficiently, I’ve started using Direct Lake with a Lakehouse.
My plan is to build a data pipeline that ingests data from Azure SQL Database into a Lakehouse. Each environment (SIT, UAT, PROD) will have its own Azure SQL DB and Data Lake.
Challenge: When deploying the pipeline from SIT to UAT to PROD, I’m unable to dynamically substitute the connection strings or data source parameters. The pipeline doesn’t seem to support environment-specific configurations out of the box.
Question: What’s the recommended approach to ensure that each pipeline deployment uses its own set of parameters (e.g., Azure SQL DB connection string, Workspace, Lakehouse path, etc.)? Is there a best practice for managing environment-specific settings during deployment?
Thanks in advance for your insights!
Hello @EduardD,
Good question! Here is the recommended approach (it works with Fabric Data Pipelines + Variable libraries):
1. Create a Variable Library with per-environment values
Examples: SqlServer, SqlDb, LakehouseId (or Lakehouse path), optional Container/Folder, etc. Define value sets for DEV / UAT / PROD.
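For illustration only (these server and database names are made up, and this isn’t the library’s storage format), the three value sets could hold something like:

```
SqlServer    DEV:  sql-myapp-dev.database.windows.net
             UAT:  sql-myapp-uat.database.windows.net
             PROD: sql-myapp-prod.database.windows.net
SqlDb        DEV/UAT/PROD: salesdb
LakehouseId  DEV:  <DEV lakehouse GUID>
             UAT:  <UAT lakehouse GUID>
             PROD: <PROD lakehouse GUID>
```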
2. Reference the library in your pipeline
In the pipeline’s Library variables pane, add the variables you need. They become available to use in activities as dynamic content.
3. Bind the variables in your activities / connections
In your Copy (or other) activities, use Add dynamic content to point the Azure SQL source (server/db) and Lakehouse sink (workspace/lakehouse path or ID) to the variables you added from the library. The official tutorial shows exactly this pattern—using a variable library to set source and destination for a Copy activity.
Tip: you can also combine pipeline parameters with library variables for flexible expressions.
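For example, once SqlServer, SqlDb and a (hypothetical) LakehouseFolder variable are added in the Library variables pane, the dynamic content could look roughly like the sketch below. The library-variable reference syntax shown here (@pipeline().libraryVariables.<name>) is my understanding of the current preview, so double-check against the expression the Add dynamic content editor generates for you; TableName is a hypothetical pipeline parameter:

```
Azure SQL source (server / database):
    @pipeline().libraryVariables.SqlServer
    @pipeline().libraryVariables.SqlDb

Lakehouse sink folder, combining a library variable with a pipeline parameter:
    @concat(pipeline().libraryVariables.LakehouseFolder, '/', pipeline().parameters.TableName)
```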
4. Switch values per stage with Deployment Pipelines
In your Deployment pipeline, select the appropriate active value set (DEV/UAT/PROD) for each stage. Your pipeline will then read the right connection values without hard-coding.
Variable Libraries provide environment-specific configuration centrally; Data Pipelines consume those values at run time.
When you deploy DEV → UAT → PROD, you don’t edit the pipeline; you just select the stage’s value set and it plugs in the right server/db/lakehouse.
Looking forward to your good news!
Best regards,
Antoine
@AntoineW that is great, thank you so much.
Could you please elaborate on this tip: “you can also combine pipeline parameters with library variables for flexible expressions”?
Side question: should I assign each library variable to an internal pipeline variable and use those local variables in my activities?
My concern is that, as a preview feature, Variable libraries could stop working at any moment, so I would need a plan B.
Plan B would be to keep a JSON config file in each lakehouse with all the config parameters, so I could reassign those pipeline variables to pick up values from the config file if library variables ever break.
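For illustration, the config file could look something like the sketch below (names and values are placeholders), and a Lookup activity (say, LookupConfig) reading it would let the pipeline reference values with standard expressions such as @activity('LookupConfig').output.firstRow.SqlServer:

```
{
  "SqlServer": "sql-myapp-uat.database.windows.net",
  "SqlDb": "salesdb",
  "LakehouseId": "00000000-0000-0000-0000-000000000000",
  "RootFolder": "Files/landing"
}
```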