Hi All,
Happy New Year.
I'm planning a CI/CD strategy for Microsoft Fabric and would appreciate guidance.
Setup:
Four workspaces: Dev, Test, Pre-Prod, Live
Only dev workspace is Git-connected
Deploying Dev → Test → Pre-Prod → Live using Python automation (Fabric CI/CD Python libraries)
Scope:
Notebooks
Data Pipelines
Semantic Models
Dataflows Gen2
Deploy selected notebooks, pipelines, etc.
Use service principal for authorization
Environment differences:
Each environment has different Lakehouses
Plan to use find_replace for Lakehouse IDs for notebooks
Thinking of using a single parameter file for all environment-specific values
Can we include Data Pipeline parameters in the same file or should it be a separate file?
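For context, the single parameter file I have in mind would look roughly like this (following the find_replace layout from the fabric-cicd docs as I understand it - all GUIDs and names below are placeholders):

```yaml
# parameter.yml - one file covering notebooks and pipelines (placeholders only)
find_replace:
  # Lakehouse ID referenced by notebooks in the Dev workspace
  - find_value: "11111111-1111-1111-1111-111111111111"
    replace_value:
      Test: "22222222-2222-2222-2222-222222222222"
      Pre-Prod: "33333333-3333-3333-3333-333333333333"
      Live: "44444444-4444-4444-4444-444444444444"
    item_type: "Notebook"
  # SQL connection string used by data pipelines
  - find_value: "dev-sql.example.database.windows.net"
    replace_value:
      Test: "test-sql.example.database.windows.net"
      Pre-Prod: "ppe-sql.example.database.windows.net"
      Live: "prod-sql.example.database.windows.net"
    item_type: "DataPipeline"
```

So essentially I'm asking whether mixing Notebook and DataPipeline entries in one file like this is reasonable, or whether separate files per item type are cleaner.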
Questions:
Is find_replace the recommended approach for multi-environment notebook deployments?
For Data Pipelines, how should SQL connection strings and other environment-specific values be updated: during deployment or post-deployment?
What is the best practice for managing connections across environments securely?
How should Lakehouse Dataflows Gen2 be updated across environments, and what are best practices?
Any advice on parameter file structure for multi-environment deployments (notebooks + pipelines)?
Any guidance or examples would be greatly appreciated.
Hi @CloudVasu
I would suggest using Variable libraries for environment-specific values. You can create key-value pairs with a value set for each of your environments - Dev, Test, Pre-Prod and Live - and keep one value set active per environment.
Lifecycle Management of the Microsoft Fabric Variable library - Microsoft Fabric | Microsoft Learn
You can use variable libraries with your notebooks and data pipelines. When you deploy, those items continue to reference the variable library automatically, so the active value set in the target environment determines which values are used.
Variable library integration with pipelines - Microsoft Fabric | Microsoft Learn
You can version control entire variable libraries in Git: settings, value sets, and metadata. The changes (adding/deleting variables, renaming value sets) are tracked and promoted through pull requests. This ensures full traceability and auditability when deploying across environments.
Hope this helps - a Kudos or accepting this as a Solution would be appreciated!
For question one, I suggest you look at dynamic replacements.
For questions two and three, my advice is to use a combination of the parameter file and variable libraries.
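To make question two a bit more concrete - this is purely a sketch with made-up placeholder values, not the fabric-cicd API itself, though the key names (find_value / replace_value / item_type) echo the parameter.yml find_replace convention - a single parameter structure can carry per-environment replacements for both notebooks and pipelines, and a small helper can pull out the pairs for one environment at deploy time:

```python
# Sketch only: one parameter structure for notebooks AND data pipelines.
# All GUIDs and hostnames below are placeholders, not real resources.
PARAMETERS = {
    "find_replace": [
        {   # Lakehouse ID referenced by notebooks in the Dev workspace
            "find_value": "11111111-1111-1111-1111-111111111111",
            "replace_value": {
                "Test": "22222222-2222-2222-2222-222222222222",
                "Pre-Prod": "33333333-3333-3333-3333-333333333333",
                "Live": "44444444-4444-4444-4444-444444444444",
            },
            "item_type": "Notebook",
        },
        {   # SQL connection string used by data pipelines
            "find_value": "dev-sql.example.database.windows.net",
            "replace_value": {
                "Test": "test-sql.example.database.windows.net",
                "Pre-Prod": "ppe-sql.example.database.windows.net",
                "Live": "prod-sql.example.database.windows.net",
            },
            "item_type": "DataPipeline",
        },
    ]
}


def replacements_for(environment: str, item_type: str) -> dict:
    """Return the find -> replace pairs for one environment and item type."""
    return {
        entry["find_value"]: entry["replace_value"][environment]
        for entry in PARAMETERS["find_replace"]
        if entry["item_type"] == item_type and environment in entry["replace_value"]
    }
```

At deploy time you would apply the Notebook pairs to notebook content and the DataPipeline pairs (e.g. SQL connection strings) to pipeline definitions. Anything secret still belongs in variable libraries or a key vault rather than in this file.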
For question four, I tend to use notebooks instead. However, others I know prefer a parameter-driven approach from pipelines.
For question five, it depends on what you mean by parameter file structure.
One other thing: you might want to look into the config file functionality.
If you find this answer useful please give kudos and/or mark as a solution.
Note, this response was generated by an actual human being and not AI...