Hello,
We have the following configuration:
We use dev, test, and prod environments; dev uses Git integration, while the others are regular workspaces. Prod uses an app to deliver the content to the stakeholders.
We sometimes encounter the problem that, during deployment of a changed dataset and report from dev to stage, the deployment pipeline loses the binding and wants to create a new dataset and a new report in stage with the same names as the existing ones, even though the logicalId in item.config.json in dev is unchanged. This results in duplicates in the stage environment and (later) in the prod environment, and we then have to delete the old ones and integrate the new report into our app.
Unfortunately you will have to take my word that a report with the same name already exists on the right side; I can't get both into the same view by scrolling.
The only changes applied are a new NativeQuery Power Query source (but no new data source) and some modifications such as added columns and measures.
Why does the deployment pipeline lose the binding between the two stages?
In addition, as you can see, there is a yellow modification marker that is always present. It is a report that, whether or not it has been modified, always shows up as modified. It gets deployed without problems, but after deployment it still shows as changed. I heard this happens with old v1 reports, where the deployment pipeline fails to detect changes; could that be the case here?
Similar topic, but the solution is not applicable:
Kind regards,
jan
Hello @v-tianyich-msft and everyone else visiting this thread.
I opened a Microsoft support ticket, but in the meantime I was able to analyse the problem in depth and come up with my own solution:
The Problem revisited:
The deployment pipeline losing its binding is a direct result of the workaround I am using to get around an “Alm_InvalidRequest_PurgeRequired” error (see here how I do it: Other Community Thread). As soon as the object gets "deleted" in the workspace, the deployment pipeline loses the binding. This is intended behaviour. So this problem is really about the occurrence of the Alm_InvalidRequest_PurgeRequired error itself.
This error occurs when major changes are made to the data model (such as new joins within existing sources at the Power Query level, or schema modifications). The underlying data residing in the workspace no longer matches the schema of the updated data model. The service therefore requires a so-called “purge” to delete the underlying data in the workspace (similar to deleting the /.pbi/cache.abf cache in local Power BI Desktop).
Now here is where the product needs to improve: I want to be able to purge the data from the powerbi.com frontend. This is not possible as of now. It is not even possible via the Power BI REST API or the Power BI PowerShell cmdlets. It is possible ONLY via the XMLA endpoint of the workspace.
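For reference, the purge is just a TMSL refresh of type "clearValues" sent to that XMLA endpoint. Expanded, the payload that the script below builds looks like this (the dataset name is a placeholder):
{
  "refresh": {
    "type": "clearValues",
    "objects": [
      { "database": "<DatasetName>" }
    ]
  }
}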
My solution:
So I wrote a little azure devops pipeline to do exactly that.
# Install the SqlServer module, which provides Invoke-ASCmd.
Install-Module -Name SqlServer -Force
# Build a credential object from the pipeline's secret variables.
$securePassword = ConvertTo-SecureString "$(password)" -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential("$(username)", $securePassword)
# TMSL "clearValues" refresh: purges the data but keeps the model metadata.
$tmsl = '{ "refresh": { "type": "clearValues", "objects": [ { "database": "" } ] } }' | ConvertFrom-Json
$tmsl.refresh.objects[0].database = "$(DatasetName)"
$payload = $tmsl | ConvertTo-Json -Depth 25
Write-Output $payload
# Send the TMSL to the workspace XMLA endpoint, e.g. powerbi://api.powerbi.com/v1.0/myorg/<WorkspaceName>.
Invoke-ASCmd -Server "$(server)" -Credential $credential -Query $payload
This sends the TMSL script using Invoke-ASCmd, which purges the underlying data in the dataset. We can then update the new data model through the Git integration while the deployment pipeline's bindings stay intact.
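If you want to sanity-check that the purge actually ran, one option is a quick DAX row count against the same XMLA endpoint (Invoke-ASCmd accepts DAX as well and returns the result as XML). Note that 'Sales' below is just a placeholder for a table in your own model:
# A purged table should report 0 rows. 'Sales' is a placeholder.
Invoke-ASCmd -Server "$(server)" -Credential $credential -Query "EVALUATE ROW(""RowCount"", COUNTROWS('Sales'))"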
Conclusion:
The product team should provide a way to purge a dataset directly from the frontend. The way this currently works is not feasible for day-to-day operations.
Hi @Jannematz ,
It's strange. Other semantic models work fine; you might need to republish.
If you are a Power BI Pro licensee, you can create a support ticket for free, and a dedicated Microsoft engineer will come to solve the problem for you.
It would be great if you could continue to share in this thread once you know the root cause or solution, to help others with similar problems.
The link of Power BI Support: Support | Microsoft Power BI
For how to create a support ticket, please refer to How to create a support ticket in Power BI - Microsoft Power BI Community
Best regards.
Community Support Team_Scott Chang
Hi @Jannematz ,
First of all, it's not duplicating it; it's replacing it. It could be that the file ID has changed. For v1 reports, it is recommended to deploy using the latest Power BI Desktop release.
Hope it helps!
Best regards,
Community Support Team_ Scott Chang
If this post helps, then please consider accepting it as the solution to help other members find it more quickly.
Hello @v-tianyich-msft,
Okay, sorry, I mixed up two topics here. The main problem is the duplication. See the following screenshots.
As you can see, HC Bau Performance does not have a binding. It cannot find the stage dataset from dev and cannot find the dev dataset from stage, so it would create duplicates if I clicked deploy. Why is that, and how can I prevent it from happening? Where can I find the file ID?
The other problem is the last paragraph in my post, which starts with "In addition". But as you said, it is probably a v1 report that is causing it.