Hello, I'm researching the trade-offs of creating a three-stage deployment pipeline (dev, test, and prod) vs. one with just two stages. The benefits of having all three seem clear, but I'm not sure how much additional manual effort is required to deploy content from one stage (and workspace) to the next. I have a couple of specific questions about that, but I welcome other considerations I may not be thinking of:
Would the connections from a desktop PBIX file to the dataflows feeding it need to be manually updated when deploying to a new workspace, since I believe their IDs would be different? Or do business rules, parameters, or some other aspect of the platform take care of that, or at least make it manageable for an environment with a number of dataflows?
This isn't specific to pipelines, but if a report is updated in any workspace outside of production, is it possible to preserve business users' existing bookmarks for that report in production, or would users need to recreate them after the updated report is deployed?
Thank you.
There's auto-binding of datasets to their source dataflows, to the extent they're all deployed across the deployment pipeline workspaces:
https://learn.microsoft.com/en-us/power-bi/create-reports/deployment-pipelines-process#auto-binding
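To make the mechanics concrete, here is a minimal sketch of kicking off a stage-to-stage deployment programmatically via the Power BI REST API's "Deploy All" operation, which is the scenario auto-binding covers (datasets get repointed to the dataflows deployed alongside them). This is not an official sample: the pipeline ID is a placeholder, and the payload field names (`sourceStageOrder`, `allowCreateArtifact`, `allowOverwriteArtifact`) reflect my reading of the Deploy All API and should be checked against the current documentation.

```python
# Sketch: build a "Deploy All" request for a deployment pipeline.
# Assumptions: endpoint shape and body fields are from the Power BI REST API
# (Pipelines - Deploy All) as I understand it; verify against the docs.
import json

API_ROOT = "https://api.powerbi.com/v1.0/myorg"

def build_deploy_all_request(pipeline_id: str, source_stage_order: int):
    """Build the URL and JSON body to deploy everything from one stage
    (0 = Dev, 1 = Test) to the next. After deployment, auto-binding
    repoints each dataset to the copy of its dataflow in the target
    workspace, so connections don't need manual fixing."""
    url = f"{API_ROOT}/pipelines/{pipeline_id}/deployAll"
    body = {
        "sourceStageOrder": source_stage_order,
        "options": {
            # Create items missing in the target stage; overwrite existing ones.
            "allowCreateArtifact": True,
            "allowOverwriteArtifact": True,
        },
    }
    return url, json.dumps(body)

# Example: deploy Dev (stage 0) -> Test. Actually sending this POST requires
# an Authorization: Bearer <token> header (user or service principal).
url, body = build_deploy_all_request("00000000-0000-0000-0000-000000000000", 0)
```

Sending the request (e.g. with `requests.post`) is omitted so the sketch stays self-contained; the point is that a full-stage deployment is a single call, and the dataflow-to-dataset rebinding happens server-side.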
Thank you.