Hello community,
My team and I are currently defining our CI/CD strategy for Power BI objects within Microsoft Fabric. Here’s a quick overview of what we're trying to achieve:
Workflow
Clarifications
The Problem
When syncing from Git to the Sandbox workspace (PBIP format), Deployment Pipelines fail to move the report properly. Here's what's happening:
It asks me to commit again, even though I had just connected and synchronized the workspace with Git.
After the commit I ran the deployment pipeline, and it “worked” the first time.
The deployed report in Dev is broken.
Immediately after the first deployment, the pipeline again flags the object as changed (?).
Now if I try to deploy it again, it fails.
Conclusion
It seems there’s a conversion mismatch between the PBIP representation that Git syncs into the workspace and what Deployment Pipelines expect when promoting the report.
Hypothesis: when Git is the entry point and the report was originally saved as PBIP, Deployment Pipelines can’t resolve the model connection properly, perhaps because we’re bypassing the requirement to move the dataset along with the report, or to ensure it’s already present in the target workspace.
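For context, a PBIP report saved next to its model typically carries a relative byPath reference in its definition.pbir, roughly like this (the names are illustrative, and the exact schema can vary by Desktop version):

```json
{
  "version": "1.0",
  "datasetReference": {
    "byPath": {
      "path": "../Sales.SemanticModel"
    }
  }
}
```

A relative path like that only resolves when the model travels with the report, which would be consistent with the broken binding we see after deployment.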
Questions
Any advice would be greatly appreciated—and apologies for the long post! Thanks in advance.
Hi @IvanGennaro ,
Thank you for reaching out to Microsoft Fabric Community.
I recommend the following approach for implementing a reliable CI/CD flow for PBIP reports in Microsoft Fabric:
Place your Direct Lake model in a shared workspace. Reference that shared model from all report workspaces (Sandbox, Dev, Prod). This avoids broken bindings when the model doesn’t exist in downstream workspaces.
Maintain Git branches like main, dev, and prod.
Each Fabric workspace syncs to its respective branch (e.g., Dev WS <-> dev branch).
Use Git pull requests (PRs) to promote code and control approvals.
Use Git as the source of truth for reports and sync from Git in every workspace. Deployment Pipelines can be used for model promotion or basic staging.
PBIP reports should include a connections.json referencing the datasetId and workspaceId (GUIDs), not just names.
This ensures the report knows where to find the dataset, regardless of workspace context.
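As an illustration of that fixed binding, a report definition that points at the model by GUID rather than by relative path might look roughly like this (the connection string, GUIDs, and the exact schema/file name are placeholders that vary by PBIP version, so treat it as a sketch):

```json
{
  "version": "1.0",
  "datasetReference": {
    "byConnection": {
      "connectionString": "Data Source=powerbi://api.powerbi.com/v1.0/myorg/SharedModels;Initial Catalog=SalesModel",
      "pbiModelDatabaseName": "11111111-2222-3333-4444-555555555555",
      "connectionType": "pbiServiceXmlaStyleLive",
      "name": "EntityDataSource"
    }
  }
}
```

With a GUID-based reference like this, the report resolves the same shared model no matter which workspace it lands in.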
You can refer to the following Microsoft documentation for more details:
https://learn.microsoft.com/en-us/fabric/cicd/git-integration/manage-branches?tabs=azure-devops
https://learn.microsoft.com/en-us/fabric/cicd/best-practices-cicd
https://learn.microsoft.com/en-us/power-bi/developer/projects/projects-git
If this post helps, please consider accepting it as the solution so other members can find it more quickly, and don't forget to give a "Kudos" – I’d truly appreciate it!
Thank you!!
Hi @IvanGennaro ,
May I ask if the provided solution helped resolve the issue? If so, please mark the helpful reply and accept it as the solution. This will help other community members with similar problems find the answer faster.
Thank you!!
Hey! Thanks @v-sathmakuri for your suggestions! I've tested them and they worked. It is definitely possible to use Git to move reports across Fabric and Deployment Pipelines for models while maintaining a Direct Lake connection.
Even though it worked, we still went for a simpler approach, because the full-fledged Git CI/CD setup was adding too much complexity for our use case. We moved to a Fabric-centric approach, using the "Publish report" method and Deployment Pipelines to move the reports and models across nonprod and prod workspaces. The only part where we still want to rely on pure Git is report distribution across functional workspaces: we would have our Report-Production workspace with all our reports, but we need to distribute copies of those into functional/department workspaces for business users to consume. All of them will point to the production model in the Report-Production workspace.
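For that distribution step, a minimal sketch of how the copies could be created and kept bound to the production model using the Power BI REST API's Reports - Clone Report In Group endpoint; the token, GUIDs, and names below are placeholders:

```python
import requests

API = "https://api.powerbi.com/v1.0/myorg"
TOKEN = "<AAD access token>"  # placeholder: acquire via MSAL or a service principal
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

PROD_WS = "<Report-Production workspace GUID>"    # placeholder
PROD_MODEL = "<production semantic model GUID>"   # placeholder
REPORT_ID = "<report GUID in Report-Production>"  # placeholder
DEPT_WORKSPACES = ["<sales WS GUID>", "<finance WS GUID>"]  # placeholders

# Clone the production report into each functional workspace,
# rebinding every copy to the single production model.
for ws in DEPT_WORKSPACES:
    resp = requests.post(
        f"{API}/groups/{PROD_WS}/reports/{REPORT_ID}/Clone",
        headers=HEADERS,
        json={
            "name": "Sales Overview",     # placeholder report name
            "targetWorkspaceId": ws,
            "targetModelId": PROD_MODEL,  # keeps the copy pointed at the prod model
        },
    )
    resp.raise_for_status()
    print(f"Cloned into {ws}: report id {resp.json()['id']}")
```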
There is also a bit more information on my reddit post regarding this topic: https://www.reddit.com/r/MicrosoftFabric/comments/1kmhx09/issues_with_power_bi_cicd_using_pbip_forma...
This is how our final approach looks:
Hey! Thank you very much! I've tested your suggestions and they worked. It is certainly possible to move the reports through Git while using Deployment Pipelines for the models.
Even though this worked well, we preferred going for a fully Fabric-centric solution for report and model CI/CD, using the "Publish" method and Deployment Pipelines only, while keeping basic Git integration (at the workspace level) for version control. We opted for this approach because of the complexity of managing PBIP files and Git from a business-user standpoint (we're aiming for self-service BI in the future), and of managing different deployment processes for different objects from a maintainer perspective. We went for simplicity.
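For anyone scripting the promotion side of this, a minimal sketch of triggering a stage deployment through the Power BI Deployment Pipelines REST API (Pipelines - Deploy All); the pipeline GUID and token are placeholders:

```python
import requests

API = "https://api.powerbi.com/v1.0/myorg"
TOKEN = "<AAD access token>"                # placeholder: acquire via MSAL or a service principal
PIPELINE_ID = "<deployment pipeline GUID>"  # placeholder

# Deploy everything from the source stage (0 = Development) to the next stage.
resp = requests.post(
    f"{API}/pipelines/{PIPELINE_ID}/deployAll",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "sourceStageOrder": 0,
        "options": {
            "allowCreateArtifact": True,     # create items missing in the target stage
            "allowOverwriteArtifact": True,  # overwrite items that already exist
        },
    },
)
resp.raise_for_status()  # the call is async: a 202 returns an operation to poll
print("Deployment operation:", resp.json().get("id"))
```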
Disclaimer: we are still seeing some odd bugs in the Deployment rules GUI, Git sync delays, and other minor problems, but it works. We are not going to full production yet, so we hope the experience will improve soon.
Our final CI/CD process for Power BI reports and models looks something like this: