Jan_Mateboer
Frequent Visitor

Power BI Workspaces: Automated CI/CD via REST API / Deployment Pipelines Impossible?

I’m trying to implement a CI/CD setup for our Power BI reporting (workspaces), but I’m hitting several roadblocks in automating Power BI workspace updates, so I hope I am simply missing something.

 

I will first explain what I already tried and the issues I hit. To go straight to the summarized questions, skip to the bottom of this post.

 

 

For internal reasons, our mainframe uses SQL/Python to do the lifting/building of the data (marts) into Azure SQL databases. It is fully versioned using git, and no manual actions are needed in the databases when the related branch is updated. Works like a charm.

 

The git repository also contains the workspaces/Power BI reports, and this is where it gets problematic. I am currently using the REST API to complete the automation for the Power BI part.

 

Method 1:
I first connected all the workspaces to git, with the idea of updating them via releases to their specific connected branch. From there I tried two different approaches (after manually updating each workspace from its git branch):

  1. Have the different data connectors represented in the branch itself.
  2. Have everything sync to the same release branch and update the data connectors to the relevant source via the REST API.

In both cases it failed at the point where:

  1. Connections cannot be mapped to Cloud Connections via the REST API; I have to manually (re-)map the connections after a dataset has been newly added (or significantly changed).
  2. It is impossible to sync from git programmatically with a Service Principal, which makes it unusable for automation purposes in production (I’m not getting up at night to press the ‘Update workspace’ button after every release, or logging in manually so the update-from-git script can run with my personal credentials/user login). See the sketch below.
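
To make the failure concrete, this is roughly the automated sync I attempted (a minimal sketch using Python requests against the Fabric Git APIs; the workspace ID and token are placeholders, and the payload fields reflect my reading of the docs, so treat the details as an assumption). With a user token it works; with a Service Principal token the update call is rejected:

    import requests

    FABRIC_API = "https://api.fabric.microsoft.com/v1"
    workspace_id = "<workspace-guid>"  # placeholder
    headers = {"Authorization": "Bearer <aad-access-token>"}  # user token works, SP token does not

    # Ask the git status endpoint which remote commit the workspace should sync to.
    status = requests.get(f"{FABRIC_API}/workspaces/{workspace_id}/git/status",
                          headers=headers).json()

    # Trigger update-from-git; this is the step that cannot run under a Service Principal.
    body = {
        "remoteCommitHash": status["remoteCommitHash"],
        "workspaceHead": status["workspaceHead"],
        "conflictResolution": {
            "conflictResolutionType": "Workspace",
            "conflictResolutionPolicy": "PreferRemote",
        },
        "options": {"allowOverrideItems": True},
    }
    resp = requests.post(f"{FABRIC_API}/workspaces/{workspace_id}/git/updateFromGit",
                         headers=headers, json=body)
    resp.raise_for_status()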

 

Method 2:
Another method I tried is (Power BI) Deployment Pipelines, but there I also hit a bunch of issues. I created the pipelines programmatically at first, but that failed on other items:

  1. Using the REST API, I cannot set the Deployment Rules that change the datasource settings. To solve that I tried correcting the datasource via the REST API afterwards, but by then the mapping to the Cloud Connection is already lost (because the datasource was different at some point), and the datasets are disconnected. I also tried: create the pipeline via REST, deploy stage 0->1 (breaks the mapping), change the datasource via REST to the required target source in stage 1, deploy stage 1->2 (keeps the mapping, since nothing changes between those stages). That result was even more fun: it created an extra set of reports + datasets on every iteration, so that didn’t work either. Basically, every release with a new or changed dataset breaks.
  2. After deployment, the REST API doesn’t allow selection of cloud connections, only on-premises gateways (the gateways REST call returns an empty list; we work in the cloud). Again, handwork after the deployment is finished.
  3. Pipeline deployment does not seem to handle deletions of reports/datasets (deprecated reports, or mistakes, will linger forever). This would need a custom script that cleans up the remains/mistakes, so I might as well do everything via a script instead of pipelines? See the deploy sketch below.
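
For context, the deploy call itself is easy enough to script (a minimal sketch; the pipeline ID and token are placeholders). What the API lacks is any option to delete items that no longer exist in the source stage, which is exactly issue 3 above:

    import requests

    PBI_API = "https://api.powerbi.com/v1.0/myorg"
    pipeline_id = "<pipeline-guid>"  # placeholder
    headers = {"Authorization": "Bearer <aad-access-token>"}

    # Deploy everything from the Development stage (0) to the Test stage (1).
    # There is no option to remove items that are missing from the source stage.
    body = {
        "sourceStageOrder": 0,
        "options": {
            "allowCreateArtifact": True,
            "allowOverwriteArtifact": True,
        },
    }
    resp = requests.post(f"{PBI_API}/pipelines/{pipeline_id}/deployAll",
                         headers=headers, json=body)
    resp.raise_for_status()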

 

 

Summarized questions:

  1. Most important: is there any way to map a dataset/datasource to a cloud connection using a Service Principal (provided that Service Principal has permissions), or any other way that is secure and not manual? (I can find dirty workarounds for the other items.)
  2. Using the (Power BI or Fabric) REST API: is there a way to update (/sync) a workspace from its Azure DevOps git branch?
  3. Can Deployment Pipelines be set up in such a way that they delete removed/no-longer-existing items?

Currently I am stuck releasing with a combination of dev in git (OK) and Deployment Pipelines: doing some shenanigans, cleaning up the mess the Deployment Pipelines leave behind with a script, praying the connections don’t get reset, and, with every new dataset, a nightly pass through all the ‘released’ workspaces to update the connectors by hand. This, to be honest, does not sound like healthy CI/CD to me when using Power BI Workspaces, so I hope I am missing something!

 

1 ACCEPTED SOLUTION
Jan_Mateboer
Frequent Visitor

Hi Leroy Lu,

 

Thanks for your extended response, I appreciate it. The links you provided are really helpful, even though they didn’t solve the issue; more background info is at the bottom of this post. It took me some time to test some other options, and I think I found the solution (note: the answers to the questions as stated are technically still ‘no’ for CI/CD with Power BI Workspaces, Service Principals and/or Cloud Connections).

 

Solution:

I ended up finding a solution that works around these limitations and makes my Power BI workspace releases fully automated after development. It is integrated with a git repository, making the whole CI/CD data pipeline from source to reports versioned in git.

 

What I did:

  1. I stopped using the Deployment Pipelines of Power BI (Fabric); they have too many limitations for automating more complex scenarios, so I removed them from the setup.
  2. Connected all workspaces to git repositories, each with its own respective branch.
  3. Switched to a Service Account used only for updating the workspaces from git (edit: the SA is a Contributor on the workspaces); everything else runs via Service Principals only. See the token sketch below.
  4. All updates to data structures and/or reports are promoted via the git branches (I have a release/promote script that deploys/syncs the branches with the required information).
  5. Nightly builds take care of synchronizing the new release(s) with the relevant databases & workspaces where needed, so releases don’t interfere with office hours.

Whenever I now release a new version that does not contain a new model/datasource, the connections stay intact, since no changes are made to the connections in the branches, even if the dataset content changes (e.g. added tables/columns).
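
For anyone rebuilding this: below is a minimal sketch of how the nightly job can authenticate as the service account using MSAL’s username/password flow (the app registration, tenant ID and account name are hypothetical placeholders, and the flow assumes the account is excluded from MFA/conditional access). The resulting token is then used for the same git status / update-from-git calls described in the original post:

    import msal

    app = msal.PublicClientApplication(
        client_id="<app-registration-id>",  # hypothetical placeholder
        authority="https://login.microsoftonline.com/<tenant-id>",
    )
    # Username/password (ROPC) flow for the service account; this only works
    # for accounts excluded from MFA/conditional access.
    result = app.acquire_token_by_username_password(
        username="svc-powerbi@contoso.com",  # hypothetical account
        password="<fetched-from-key-vault>",  # needs the rotation mentioned under cons
        scopes=["https://api.fabric.microsoft.com/.default"],
    )
    headers = {"Authorization": f"Bearer {result['access_token']}"}
    # ...then loop over the workspaces in the release and call
    # /git/status and /git/updateFromGit with these headers.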

 

Pros:

  1. The whole development lane is now (as good as) automated and runs via git.
  2. Less prone to human error in setting connections and updating reports.
  3. Highly scalable.
  4. Supports user groups running different versions (in case they are not ready to migrate yet).

Cons:

  1. Introducing new models/datasets still needs a one-time manual mapping of the connection for the respective user groups, since Cloud Connections are used (gateways can be set programmatically, Cloud Connections cannot). This means major updates have to be done during weekends and require manual labour. It does not pose a security risk, since the added datasets by default point to a deliberately empty database (part of the promote script).
  2. Added effort to manage password rotation for the service account.

 

Hope this helps other people who hit the same roadblock(s). I can tag this as an answer if needed.

 

 

 

Other info:

These are the references I used and Leroy pointed at:

Power BI REST APIs for embedded analytics and automation - Power BI REST API | Microsoft Learn (the base for the REST APIs)

CI/CD workflow options in Fabric - Microsoft Fabric | Microsoft Learn (also a great read, but it doesn’t solve the issue: git updates are restricted to actual users, and the Fabric (UI) ‘shell’ basically calls the same APIs, and therefore has the same limitations)

 

Another thing I tried was using the Power BI Deployment Pipelines (since the git sync could not be automated with an SP) and keeping them alive (instead of rebuilding them), so I could update them with the right connections after building a new target workspace. However, you also cannot use the same workspace as the source for different Deployment Pipelines, so that didn’t work either. You can of course ‘chain’ all the users’ workspaces onto the pipeline of the specific version needed, but that becomes pretty messy when you have to switch a workspace to a different version, because it can create gaps in the pipeline.

 

 

 

 


4 REPLIES
datansh-shashan
New Member

To address the issues mentioned, here’s a solution:

  1. Cloud Connection Mapping via REST API
    Instead of manually re-mapping connections after adding or modifying a dataset, you can use the following REST API to automate the mapping process:

    API Endpoint:

    Payload Example:

     
    { "datasourceObjectIds": [ "dc2f2dac-e5e2-4c37-af76-2a0bc10f16cb",
    "3bfe5d33-ab7d-4d24-b0b5-e2bb8eb01cf5" ] }  // ConnectionId of your cloud datasource 

    Implementation Tip:

    • When creating the data source for the first time, note its connection ID. This will be required for mapping later using the API.

Jan_Mateboer
Frequent Visitor

Hi Datansh,

 

As mentioned in the original post, Cloud Connections cannot be mapped via the REST API:

“after deployment, REST API doesn’t allow selection of cloud connections, only on-premise gateways (gateways REST call returns an empty list, we work in the cloud)...”. As far as I know this issue has not been fixed/changed, but I must admit I didn’t check, because the whole system is basically running like a train.


Anonymous
Not applicable

Hi, @Jan_Mateboer 

Thank you for sharing your trial-and-error experiences. Your insights are invaluable to community members facing similar issues, helping them find feasible solutions more quickly.

 

Let’s address your questions step by step:

1. Firstly, there is currently no API that directly supports cloud gateway connections. Below is a screenshot from the relevant documentation:

(screenshot of the Gateways - Create Datasource documentation omitted)

 

For detailed information, please refer to the following link:

Gateways - Create Datasource - REST API (Power BI REST APIs) | Microsoft Learn
 

2. Secondly, regarding the automatic management of workspace synchronization, I also thought of the pipeline deployment method, which aligns with your second approach. However, as you mentioned, it still has several limitations. Below is a screenshot from the official documentation:

(screenshot of the documentation omitted)

 

For detailed information, please refer to:

The Microsoft Fabric deployment pipelines process - Microsoft Fabric | Microsoft Learn

Here are the API documentation links related to pipelines, which I hope will be helpful to you:

Pipelines - REST API (Power BI REST APIs) | Microsoft Learn

 

3. Lastly, Power BI Deployment Pipelines do not inherently support the automatic deletion of non-existent or deleted items. You might consider integrating pipeline deployment with Azure DevOps.

Here are some relevant documents that I hope will be helpful to you:

Azure DevOps build pipeline integration with Power BI Desktop projects - Power BI | Microsoft Learn


Of course, if you have any new ideas, you are welcome to contact us.
 

Best Regards,

Leroy Lu

If this post helps, then please consider accepting it as the solution, to help other members find it more quickly.
