IvanGennaro
Regular Visitor

Issues with Power BI CI/CD using PBIP format, Direct Lake, Git Integration, and Deployment Pipelines

Hello community,

My team and I are currently defining our CI/CD strategy for Power BI objects within Microsoft Fabric. Here’s a quick overview of what we're trying to achieve:

Workflow

  1. Report Development
    Developers create reports in Power BI Desktop, connecting them to Semantic Models in our Sandbox environment via Direct Lake.
  2. Version Control
    Reports are saved in PBIP format and pushed to an Azure Git Repo connected to the Sandbox workspace using Fabric Git Integration. We want to track report changes directly in Git.
  3. Git as the Source of Truth
    Instead of using "Publish to Workspace," we rely on Git synchronization as the entry point. Fabric correctly interprets the PBIP structure and reflects it as a report object.
  4. Deployment
    We use Deployment Pipelines to move reports across environments: Sandbox → Dev → Prod.
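For reference, promotion between stages can also be scripted. Below is a minimal sketch (not our production code) that triggers a "deploy all" from one stage to the next using the Power BI deployment pipelines REST API; the pipeline ID, stage order, and token acquisition are placeholders you would supply yourself.

```python
# Sketch: trigger a deployment-pipeline "deploy all" from one stage to the next.
# Assumes an AAD access token with the Pipeline.ReadWrite.All scope already exists;
# the pipeline ID below is a placeholder.
import requests

ACCESS_TOKEN = "<aad-access-token>"                     # placeholder
PIPELINE_ID = "00000000-0000-0000-0000-000000000000"    # placeholder

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/pipelines/{PIPELINE_ID}/deployAll",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={
        "sourceStageOrder": 0,  # 0 = first stage (e.g., Sandbox); deploys to the next stage
        "options": {
            "allowCreateArtifact": True,     # create items missing in the target stage
            "allowOverwriteArtifact": True,  # overwrite items that already exist
        },
    },
)
resp.raise_for_status()
print("Deployment requested:", resp.status_code)
```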

Clarifications

  • Reports and Semantic Models are treated as separate objects with different versioning workflows.
  • I'm focusing only on report versioning in this post.
  • I’m aware that you can't deploy a report without its associated model unless the same model already exists in the target workspace. I believe this is the root issue—more on that in the conclusion.

The Problem

When syncing from Git to the Sandbox workspace (PBIP format), Deployment Pipelines fail to move the report properly. Here's what's happening:

  1. After syncing the Sandbox workspace with the Git repo, I try to deploy to Dev.
  2. Once deployed to Dev, the report appears uncommitted again. I assume this is because Fabric converts PBIP into its internal .PBIR format, triggering a state mismatch.
  3. After manually committing in Dev, the report is technically there—but it's broken (e.g., doesn't render or can't connect to the model).
  4. Further redeployments fail, and if I try to re-deploy from Sandbox again, it still doesn’t work—even though the files are present in both environments.
  5. This cycle continues, requiring manual commits and still resulting in broken or unusable reports.

Screenshots (captions, in order):

  1. It is asking me to commit again, right after I had just connected and synchronized the workspace with Git.
  2. After the commit, I ran the deployment pipeline and it “worked” the first time.
  3. The deployed report in Dev is broken.
  4. Immediately after the first deployment, it again flags the object as different (?).
  5. Now if I try to deploy it again, it fails.

Conclusion

It seems there’s a conversion mismatch between:

  • The PBIP folder structure the developer saves from Power BI Desktop (originally a PBIX)
  • The Fabric-native report object format (the .pbir definition and folder structure)
  • The Deployment Pipeline requirements (especially around model connectivity)

Hypothesis: When Git is the entry point, and the report was originally saved as PBIP, Deployment Pipelines can’t resolve the model connection properly—perhaps because we’re bypassing the requirement to move the dataset along with the report, or ensure it's already present in the target workspace.
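To make the hypothesis concrete: in a PBIP project, the report's dataset binding lives in the report's definition.pbir file, which points at the model either by relative path (byPath, a sibling SemanticModel folder) or by a published connection (byConnection). The sketch below only assumes that standard layout and shows how to check which binding a report carries before it goes to Git; exact field names can vary by Desktop version, so treat it as illustrative.

```python
# Sketch: inspect which kind of dataset binding a PBIP report carries.
# Assumes the standard PBIP layout where <ReportFolder>/definition.pbir holds a
# "datasetReference" with either "byPath" or "byConnection" (field names may
# vary slightly between Power BI Desktop versions).
import json
from pathlib import Path

pbir_path = Path("MyReport.Report/definition.pbir")  # hypothetical path
definition = json.loads(pbir_path.read_text(encoding="utf-8"))

dataset_ref = definition.get("datasetReference", {})
if "byPath" in dataset_ref:
    # Relative reference to a sibling SemanticModel folder: only resolvable when
    # report and model travel together (same repo / same workspace).
    print("byPath binding ->", dataset_ref["byPath"].get("path"))
elif "byConnection" in dataset_ref:
    # Reference to an already-published model: survives being moved to a
    # workspace that does not contain the model itself.
    print("byConnection binding ->", dataset_ref["byConnection"])
else:
    print("No dataset reference found; check the PBIP structure.")
```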

Questions

  • Am I missing something?
  • Is there a better approach for using PBIP, Git Integration, and Deployment Pipelines together in Fabric?
  • Has anyone found a reliable CI/CD flow for reports with Direct Lake and PBIP?

Any advice would be greatly appreciated—and apologies for the long post! Thanks in advance.

1 ACCEPTED SOLUTION
v-sathmakuri
Community Support

Hi @IvanGennaro ,

 

Thank you for reaching out to Microsoft Fabric Community.

 

I recommend the following approach for implementing a reliable CI/CD flow for PBIP reports in Microsoft Fabric: 

 

Place your Direct Lake model in a shared workspace. Reference that shared model from all report workspaces (Sandbox, Dev, Prod). This avoids broken bindings when the model doesn’t exist in downstream workspaces.
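If a report does land in a downstream workspace without a working binding, it can be re-pointed at the shared model after deployment. Here is a minimal sketch using the Power BI "rebind report" REST call; all IDs are placeholders and an existing token with access to both workspaces is assumed.

```python
# Sketch: rebind a deployed report to the shared Direct Lake model.
# Workspace, report, and dataset IDs are placeholders; an AAD token with
# Report.ReadWrite.All (and access to both workspaces) is assumed.
import requests

ACCESS_TOKEN = "<aad-access-token>"                           # placeholder
REPORT_WORKSPACE_ID = "11111111-1111-1111-1111-111111111111"  # e.g., Dev report workspace
REPORT_ID = "22222222-2222-2222-2222-222222222222"            # placeholder
SHARED_MODEL_ID = "33333333-3333-3333-3333-333333333333"      # model in the shared workspace

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{REPORT_WORKSPACE_ID}/reports/{REPORT_ID}/Rebind",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"datasetId": SHARED_MODEL_ID},
)
resp.raise_for_status()
print("Report rebound to the shared model.")
```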


Maintain Git branches like main, dev, and prod.
Each Fabric workspace syncs to its respective branch (e.g., Dev WS <-> dev branch).
Use Git pull requests (PRs) to promote code and control approvals.


Use Git as the source for reports and sync from Git in every workspace. Deployment Pipelines can still be used for model promotion or basic staging.
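Syncing each workspace from its branch can also be automated. The sketch below calls the Fabric Git APIs (get status, then update from Git); it is only an outline, the request body may need conflictResolution/options fields per the Fabric REST documentation, and the workspace ID and token are placeholders.

```python
# Sketch: update a Fabric workspace from its connected Git branch.
# Workspace ID and token are placeholders; the request body is kept minimal and
# may need extra conflictResolution/options fields (see the Fabric REST docs).
import requests

ACCESS_TOKEN = "<aad-access-token>"                    # placeholder
WORKSPACE_ID = "44444444-4444-4444-4444-444444444444"  # placeholder
BASE = f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}/git"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# 1) Read the current Git status to get the remote commit and workspace head.
status = requests.get(f"{BASE}/status", headers=HEADERS)
status.raise_for_status()
state = status.json()

# 2) Pull the remote commit into the workspace ("Update from Git").
update = requests.post(
    f"{BASE}/updateFromGit",
    headers=HEADERS,
    json={
        "remoteCommitHash": state["remoteCommitHash"],
        "workspaceHead": state.get("workspaceHead"),
    },
)
print("Update from Git:", update.status_code)
```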

 

PBIP reports should carry a dataset reference that points at the published model by GUIDs (dataset and workspace), not just by name or relative path; in current PBIP projects this reference lives in the report's definition.pbir file.
This ensures the report knows where to find the dataset, regardless of workspace context.

 

You can refer to the following Microsoft documentation for more details:

 

https://learn.microsoft.com/en-us/fabric/cicd/git-integration/manage-branches?tabs=azure-devops 

https://learn.microsoft.com/en-us/fabric/cicd/best-practices-cicd 

https://learn.microsoft.com/en-us/power-bi/developer/projects/projects-git 

 

If this post helps, please consider accepting it as the solution so other members can find it more quickly, and don't forget to give a "Kudos". I'd truly appreciate it!

 

Thank you!!


4 REPLIES
v-sathmakuri
Community Support

Hi @IvanGennaro ,

 

May I ask if the provided solution helped resolve the issue? If so, please mark the helpful reply and accept it as the solution. This will help other community members with similar problems solve them faster.

 

Thank you!!


Hey! Thanks @v-sathmakuri for your suggestions! I've tested it and it worked. It is definitely possible to use Git to move reports across Fabric workspaces and Deployment Pipelines for models while maintaining a Direct Lake connection.

 

Even though it worked, we went with a simpler approach in the end, because the full-fledged Git CI/CD approach added too much complexity for our use case. We moved to a Fabric-centric approach, using the "Publish report" method and Deployment Pipelines to move reports and models across non-prod and prod workspaces. The only part where we want to rely on pure Git is report distribution across functional workspaces: we keep a Report-Production workspace with all our reports, and distribute copies of them into functional/department workspaces for business users to consume. All of those copies point to the production model in the Report-Production workspace.
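For reference, that distribution step could be scripted with the Power BI "clone report" REST call, which lets a copy land in another workspace while staying bound to the production model. A rough sketch with placeholder IDs and token:

```python
# Sketch: copy a report from the Report-Production workspace into a functional
# workspace, keeping it bound to the production semantic model.
# All IDs are placeholders; an AAD token with access to both workspaces is assumed.
import requests

ACCESS_TOKEN = "<aad-access-token>"                      # placeholder
PROD_REPORT_WS = "55555555-5555-5555-5555-555555555555"  # Report-Production workspace
REPORT_ID = "66666666-6666-6666-6666-666666666666"       # report to distribute
TARGET_WS = "77777777-7777-7777-7777-777777777777"       # functional/department workspace
PROD_MODEL_ID = "88888888-8888-8888-8888-888888888888"   # production semantic model

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{PROD_REPORT_WS}/reports/{REPORT_ID}/Clone",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={
        "name": "Sales Overview (Finance copy)",  # hypothetical display name
        "targetWorkspaceId": TARGET_WS,
        "targetModelId": PROD_MODEL_ID,            # keep pointing at the production model
    },
)
resp.raise_for_status()
print("Cloned report ID:", resp.json().get("id"))
```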

There is also a bit more information on my reddit post regarding this topic: https://www.reddit.com/r/MicrosoftFabric/comments/1kmhx09/issues_with_power_bi_cicd_using_pbip_forma...

 

This is how our final approach looks:

CI-CD Final approach.png

Hey! Thank you very much! I've tested your suggestions and it worked. It is certainly possible to move reports through Git while using Deployment Pipelines for the models.

 

Even though this worked well, we preferred going with a fully Fabric-centric solution for report and model CI/CD, using the "Publish" method and Deployment Pipelines only, while keeping basic Git integration (at the workspace level) for version control. We opted for this approach because of the complexity of managing PBIP files and Git from a business-user standpoint (we are aiming for self-service BI in the future), and of managing different deployment processes for different objects from a maintainer perspective. We went for simplicity.

 

Disclaimer: we are still seeing some odd bugs in the deployment rules GUI, Git sync delays, and other minor problems, but it worked. We are not going to full production yet, so we hope the experience will improve soon.

 

Our final CI/CD process for Power BI reports and models looks something like this:

CI-CD Final approach.png
