aaroncampeau
Frequent Visitor

Identical Models In Different Workspaces on Same Capacity Performing Much Differently

My team owns a data model used both for standard core reporting, also owned by our team, and for self-service reporting created by power users within the organization. The model is managed in Tabular Editor and deployed to its own dedicated workspace; all other reports (aside from one simple report used for Row Level Security validation) are deployed to different workspaces.

About a month ago, the performance of the model in Production degraded significantly in both core and self-service reporting. This performance degradation did not occur after a deployment and happened only in our Production environment. The Production model is deployed to a workspace on a P2 capacity while our lower environments are on a P1 - prior to these performance issues our Production model was much faster than the model in Development and Test, as expected, but it is now quite a bit slower.

Strangely, these issues seem related specifically to the Production workspace rather than to the entire P2 Production capacity. While troubleshooting, we ran Performance Analyzer on one of the slower-loading visuals in our core reporting connected to four different models - our existing Production model, a newly deployed copy of the model in our existing Production workspace, our existing Development model on the P1 capacity, and a newly deployed copy of our model in a newly created workspace on the P2 Production capacity - with the following results:

  • Production - visual refreshed in 150 seconds
  • Copy in Production workspace - visual refreshed in 140 seconds
  • Development - visual refreshed in 45 seconds
  • New P2 workspace - visual refreshed in 22 seconds (similar to what we were seeing in Production before the performance degradation)
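To make the comparison concrete, here is a quick sketch normalizing the Performance Analyzer timings above against the fresh-workspace baseline (the numbers are the ones reported in this post):

```python
# Refresh times (seconds) from the Performance Analyzer tests above.
timings = {
    "Production": 150,
    "Copy in Production workspace": 140,
    "Development (P1)": 45,
    "New P2 workspace": 22,
}

# Normalize against the newly created workspace on the same P2 capacity.
baseline = timings["New P2 workspace"]
ratios = {name: round(t / baseline, 1) for name, t in timings.items()}

# Both models in the existing Production workspace are ~6-7x slower than an
# identical model in a fresh workspace on the same capacity, which points at
# the workspace itself rather than the capacity.
for name, ratio in ratios.items():
    print(f"{name}: {ratio}x baseline")
```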

If the issues were related to memory allocation we would expect to see similar refresh times across all workspaces on the P2 capacity, but we are instead only seeing them in the existing Production workspace, which is configured identically to the newly created test workspace - Large dataset storage format, Premium per capacity, using the same P2 capacity. In different circumstances we'd likely delete the model, re-deploy, and re-publish our core reports to see if that rectified the issue, but that's not an option for us, as there are many reports aside from the core reporting, not owned by our team, connected to the model that would be deleted or break.
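As an aside, if a redeploy ever does become unavoidable, downstream reports don't necessarily have to be deleted: the Power BI REST API's "Rebind Report In Group" call can repoint a report at a new dataset. A minimal sketch, assuming an already-acquired AAD access token (the IDs and token below are placeholders, not values from this thread):

```python
import json
import urllib.request

API = "https://api.powerbi.com/v1.0/myorg"

def rebind_url(group_id: str, report_id: str) -> str:
    """URL for the Reports - Rebind Report In Group endpoint."""
    return f"{API}/groups/{group_id}/reports/{report_id}/Rebind"

def rebind_report(token: str, group_id: str, report_id: str,
                  new_dataset_id: str) -> int:
    """Repoint a report at a different dataset; returns the HTTP status."""
    body = json.dumps({"datasetId": new_dataset_id}).encode()
    req = urllib.request.Request(
        rebind_url(group_id, report_id),
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    # Network call - requires a valid token and appropriate permissions.
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

This would still need to be run for every downstream report, so it is a fallback rather than a fix for the underlying workspace issue.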

Has anyone else experienced a similar issue in the past? All signs point to this being some kind of issue with the workspace itself, but there's nothing we've been able to identify as a likely cause and this isn't something any of us have experienced before.

1 ACCEPTED SOLUTION
v-zhangti
Community Support

Hi, @aaroncampeau 

 

You can try to optimize the model.

 

Consider the optimization possibilities for a DirectQuery model. 

At the datasource layer:

  • The datasource can be optimized to ensure the fastest possible querying by pre-integrating data (which is not possible at the model layer), applying appropriate indexes, defining table partitions, materializing summarized data (with indexed views), and minimizing the amount of calculation. The best experience is achieved when pass-through queries need only filter and perform inner joins between indexed tables or views.
  • Ensure that gateways have enough resources, preferably on dedicated machines, with sufficient network bandwidth and in close proximity to the datasource.

At the model layer:

  • Power Query query designs should preferably apply no transformations - otherwise attempt to keep transformations to an absolute minimum.
  • Model query performance can be improved by configuring single direction relationships unless there is a compelling reason to allow bi-directional filtering. Also, model relationships should be configured to assume referential integrity is enforced (when this is the case) and will result in datasource queries using more efficient inner joins (instead of outer joins).
  • Avoid creating Power Query custom columns or model calculated columns - materialize these in the datasource when possible.
  • There may be an opportunity to tune DAX expressions for measures and RLS rules, perhaps rewriting logic to avoid expensive formulas.

The size of a Premium capacity determines its available memory and processor resources and limits imposed on the capacity. The number of Premium capacities is also a consideration, as creating multiple Premium capacities can help isolate workloads from each other.

 

For more information on optimization, please refer to this document: https://docs.microsoft.com/power-bi/admin/service-premium-capacity-optimize

 

Best Regards,

Community Support Team _Charlotte

If this post helps, then please consider accepting it as the solution to help the other members find it more quickly.


3 REPLIES

collinq
Super User

Hi @aaroncampeau ,

 

You didn't specifically call out that you are using a Dataflow, but there is a known issue that started with Dataflows roughly a month ago - Known issue - Long running, failed or stuck dataflow in Premium Gen2 - Power BI | Microsoft Docs

 

That said, you have a great explanation of the issue, and since you have already done performance testing and these capacities are controlled by Microsoft, I would suggest opening a support ticket to make sure that your assigned capacity is not having issues behind the scenes that you can't see or control.
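When opening that ticket, it can help to attach concrete evidence. An illustrative sketch (not from this thread) of pulling recent refresh history for the slow dataset via the documented "Get Refresh History In Group" REST endpoint - the workspace/dataset IDs and AAD token are placeholders:

```python
import json
import urllib.request

API = "https://api.powerbi.com/v1.0/myorg"

def refresh_history_url(group_id: str, dataset_id: str, top: int = 30) -> str:
    """URL for the Datasets - Get Refresh History In Group endpoint."""
    return f"{API}/groups/{group_id}/datasets/{dataset_id}/refreshes?$top={top}"

def get_refresh_history(token: str, group_id: str, dataset_id: str) -> list:
    """Fetch recent refresh attempts (status, start/end times) for a dataset."""
    req = urllib.request.Request(
        refresh_history_url(group_id, dataset_id),
        headers={"Authorization": f"Bearer {token}"},
    )
    # Network call - requires a valid token with read access to the workspace.
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["value"]
```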




Did I answer your question? Mark my post as a solution!

Proud to be a Datanaut!
Private message me for consulting or training needs.




These are tabular models rather than dataflows, so I don't think that known issue is likely to be related - it sounds like our next move is to escalate to support. Thanks so much for your help, @collinq
