Hi All,
Happy New Year.
I'm planning a CI/CD strategy for Microsoft Fabric and would appreciate guidance.
Setup:
Four workspaces: Dev, Test, Pre-Prod, Live
Only dev workspace is Git-connected
Deploying Dev → Test → Pre-Prod → Live using Python automation (Fabric CI/CD Python libraries)
Scope:
Notebooks
Data Pipelines
Semantic Models
Dataflows Gen2
Deploy selected notebooks, pipelines, etc.
Use a service principal for authorization
Environment differences:
Each environment has different Lakehouses
Plan to use find_replace to swap Lakehouse IDs in notebooks
Thinking of using a single parameter file for all environment-specific values
Can we include Data Pipeline parameters in the same file or should it be a separate file?
Questions:
Is find_replace the recommended approach for multi-environment notebook deployments?
For Data Pipelines, how should SQL connection strings and other environment-specific values be updated: during deployment or post-deployment?
What is the best practice for managing connections across environments securely?
How should Lakehouse Dataflows Gen2 be updated across environments, and what are best practices?
Any advice on parameter file structure for multi-environment deployments (notebooks + pipelines)?
Any guidance or examples would be greatly appreciated.
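For context, this is the rough shape of the single parameter.yml I have in mind - the IDs are placeholders and I haven't validated the layout against the fabric-cicd docs yet:

```yaml
find_replace:
    # Lakehouse IDs referenced in notebooks
    - find_value: "<dev-lakehouse-id>"            # value as it appears in the Dev source files
      replace_value:
          TEST: "<test-lakehouse-id>"
          PREPROD: "<preprod-lakehouse-id>"
          PROD: "<live-lakehouse-id>"
      item_type: "Notebook"

    # SQL connection ID used by Data Pipelines, kept in the same file
    - find_value: "<dev-sql-connection-id>"
      replace_value:
          TEST: "<test-sql-connection-id>"
          PREPROD: "<preprod-sql-connection-id>"
          PROD: "<live-sql-connection-id>"
      item_type: "DataPipeline"
```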
For question one, I suggest you look at dynamic replacements.
For questions two and three, my advice is to use a combination of the parameter file and variable libraries.
For question four, I tend to use notebooks instead. However, others I know prefer a parameter-driven approach from pipelines.
For question five, it depends on what you mean by parameter file structure.
One other thing: you might want to look into the config file functionality.
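To illustrate what I mean by dynamic replacements: rather than hard-coding a Lakehouse GUID per environment, fabric-cicd can resolve the ID from the target workspace at deploy time. Roughly like this, although I am going from memory, so double-check the exact variable tokens (and the example item name SalesLakehouse) against the fabric-cicd parameterization docs:

```yaml
find_replace:
    - find_value: "<dev-lakehouse-id>"                    # literal value in the Dev source files
      replace_value:
          TEST: "$items.Lakehouse.SalesLakehouse.id"      # resolved in the target workspace at deploy time
          PREPROD: "$items.Lakehouse.SalesLakehouse.id"
          PROD: "$items.Lakehouse.SalesLakehouse.id"
      item_type: "Notebook"
```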
If you find this answer useful please give kudos and/or mark as a solution.
Note, this response was generated by an actual human being and not AI...
@CloudVasu The responses given by the other folks are exactly on point, so I won't repeat them.
It seems you want to learn more about how parameterization is supported in the fabric-cicd Python library and what it can do. You can find the documentation here.
You can read all about the supported parameterization features, see examples of parameter file setup, and view real parameterization use cases by item type for deployment. If you have any specific questions related to fabric-cicd I would recommend raising a GitHub issue here.
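To give you a feel for it, a minimal deployment with fabric-cicd looks roughly like the sketch below. The workspace ID, repo path, environment name and item types are placeholders, and passing a service principal via token_credential is how I have seen it done - treat the docs linked above as authoritative:

```python
from azure.identity import ClientSecretCredential
from fabric_cicd import FabricWorkspace, publish_all_items, unpublish_all_orphan_items

# Service principal credentials - pull these from a key vault or pipeline secrets, not source code
credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<sp-client-id>",
    client_secret="<sp-client-secret>",
)

# "TEST" must match an environment key used under replace_value in parameter.yml
target_workspace = FabricWorkspace(
    workspace_id="<test-workspace-id>",
    environment="TEST",
    repository_directory="<path-to-local-git-repo-root>",
    item_type_in_scope=["Notebook", "DataPipeline", "SemanticModel", "Dataflow"],  # check supported item type names in the docs
    token_credential=credential,
)

# Publish everything in scope to the target workspace, then clean up items no longer in the repo
publish_all_items(target_workspace)
unpublish_all_orphan_items(target_workspace)
```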
Hope this helps!
Hi @CloudVasu ,
Thank you for reaching out to the Microsoft Community Forum.
Hi @deborshi_nag and @KevinChant, thank you for your prompt responses.
Hi @CloudVasu, could you please try the proposed solutions shared by @deborshi_nag and @KevinChant? Let us know if you’re still facing the same issue and we’ll be happy to assist you further.
Regards,
Dinesh
Hi @CloudVasu ,
We haven’t heard from you since the last response and were just checking back to see if you have a resolution yet. If you have any further queries, do let us know.
Regards,
Dinesh
Hi @CloudVasu,
I would suggest you use Variable Libraries for your environments. You can create key-value pairs for each of your environments - Dev, Test, Pre-Prod and Live - and keep one active value set per environment.
Lifecycle Management of the Microsoft Fabric Variable library - Microsoft Fabric | Microsoft Learn
You can use variable libraries with your notebooks and data pipelines. When you deploy, the notebooks and pipelines keep their references to the variable library, and the values resolve according to the value set that is active in that workspace.
Variable library integration with pipelines - Microsoft Fabric | Microsoft Learn
You can version control entire variable libraries in Git: settings, value sets, and metadata. The changes (adding/deleting variables, renaming value sets) are tracked and promoted through pull requests. This ensures full traceability and auditability when deploying across environments.
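For reading those values inside a notebook, the notebookutils variable library helpers look roughly like this - the exact calls may differ by runtime version, so run notebookutils.variableLibrary.help() to confirm, and ConfigVL / lakehouse_id are just example names:

```python
# Inside a Fabric notebook; notebookutils is available in the Fabric Spark runtime

# Fetch a single variable from a variable library named "ConfigVL"
lakehouse_id = notebookutils.variableLibrary.get("$(/**/ConfigVL/lakehouse_id)")

# Or load the whole library and read variables as attributes
config = notebookutils.variableLibrary.getLibrary("ConfigVL")
print(config.lakehouse_id)

# Which value comes back depends on the value set that is active in the current workspace,
# so the same notebook code runs unchanged in Dev, Test, Pre-Prod and Live.
```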
Hope this helps - please consider leaving a Kudos or accepting this as a Solution!