wojciech
Helper II

Total Memory Used for Semantic Model Refresh

Hi,

 

I have workspace monitoring enabled to capture stats about semantic model refreshes. After studying the SemanticModelLogs table in the Monitoring KQL Database, I have produced this query:

SemanticModelLogs
| where ItemId == "00000-0000-0000-83db-0000000000000000"
      and OperationName == "ProgressReportEnd"
      and OperationDetailName == "TabularRefresh"
| extend MashupPeakMemoryStr = extract(@"MashupPeakMemory:\s*([0-9]+)", 1, EventText)
| where isnotempty(MashupPeakMemoryStr)
| extend MashupPeakMemoryKB = tolong(MashupPeakMemoryStr)
| extend MashupPeakMemoryGB = round(MashupPeakMemoryKB / 1024.0 / 1024.0, 2)
| summarize TotalMemoryGB = round(sum(MashupPeakMemoryGB), 2),
            MaxMemoryGB = max(MashupPeakMemoryGB),
            AvgMemoryGB = round(avg(MashupPeakMemoryGB), 2),
            RecordCount = count()
It produces this output:
TotalMemoryGB   MaxMemoryGB   AvgMemoryGB   RecordCount
23.14           0.44          0.24          94
I have 94 tables in the model (I know, but this is not my model), so it's doing something right. However, the refresh completes fine on an F32, so 23.14 GB of total memory does not make much sense. I want to fit the model into an F16 and I am trying to understand how much memory this refresh needs to complete successfully. Is this even possible to calculate? I would add that the model takes 2.7 GB before refresh.
Thank you,

WJ
1 ACCEPTED SOLUTION
Zanqueta
Solution Sage

Hello @wojciech,

 

Your analysis is correct: the value of 23.14 GB shown as TotalMemoryGB does not represent the actual memory required for the entire refresh. It is simply the sum of the peak memory usage for each individual table. These peaks do not occur simultaneously, so this figure is not a reliable indicator of the real memory footprint during refresh.

Why does this happen?

  • Each table has its own mashup process, and MashupPeakMemory reflects the peak during that table’s transformation.
  • The refresh process is sequential (or partially parallel), but it never consumes the sum of all peaks at once.
  • What truly matters is the maximum concurrent memory usage during the refresh, which is not directly available in the log.
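
If you want a rough feel for how much mashup memory is actually in use at the same time, one option is to spread each table's peak across the interval it was being refreshed and sum the overlaps per time bucket. This is only a sketch: it assumes the SemanticModelLogs rows expose Timestamp (taken here as the end of the step) and DurationMs columns, so check the names against your schema.

SemanticModelLogs
| where ItemId == "00000-0000-0000-83db-0000000000000000"
      and OperationName == "ProgressReportEnd"
      and OperationDetailName == "TabularRefresh"
| extend PeakKB = tolong(extract(@"MashupPeakMemory:\s*([0-9]+)", 1, EventText))
| where isnotnull(PeakKB)
// Assume Timestamp marks the end of the step and DurationMs its length (verify in your schema).
| extend StartTime = Timestamp - DurationMs * 1ms
// Expand each step into the 1-minute buckets it overlaps.
| extend Buckets = range(bin(StartTime, 1m), bin(Timestamp, 1m), 1m)
| mv-expand Bucket = Buckets to typeof(datetime)
// Sum the peaks of all steps that overlap each bucket, then take the worst bucket.
| summarize ConcurrentKB = sum(PeakKB) by Bucket
| summarize ApproxPeakConcurrentMashupGB = round(max(ConcurrentKB) / 1024.0 / 1024.0, 2)

This still overstates reality (a table rarely sits at its peak for its whole duration), but it is a far better ceiling than summing all 94 peaks.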

How can you estimate the required memory?

There is no exact formula, but you can use the following approaches:

1. Use MaxMemoryGB as a reference

Your MaxMemoryGB = 0.44 GB indicates that the heaviest table consumes approximately 440 MB during mashup. However, this only covers the mashup phase and does not include:
  • Model compression.
  • Internal processing by Analysis Services.
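
If you want to see which objects sit behind that 0.44 GB peak, a small variation of your own query lists the heaviest mashup steps. EventText normally names the object being refreshed; the Timestamp column used below is an assumption to verify against your schema.

SemanticModelLogs
| where ItemId == "00000-0000-0000-83db-0000000000000000"
      and OperationName == "ProgressReportEnd"
      and OperationDetailName == "TabularRefresh"
| extend PeakKB = tolong(extract(@"MashupPeakMemory:\s*([0-9]+)", 1, EventText))
| where isnotnull(PeakKB)
// Show the ten heaviest mashup steps so you can see what drives MaxMemoryGB.
| top 10 by PeakKB desc
| project Timestamp, PeakMB = round(PeakKB / 1024.0, 1), EventText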

2. Consider the model size plus overhead

A practical rule for Power BI Premium:
  • Required memory ≈ (Model size × 2 to 3)
    This accounts for:
    • The loaded model.
    • Temporary structures during refresh.
In your case:
  • Model size = 2.7 GB
  • Estimated requirement = roughly 5.4 to 8.1 GB (2.7 GB × 2 to 3, including overhead).
Note that the limit that matters here is the maximum memory per semantic model, which is 10 GB on an F32 and 5 GB on an F16 (not 32 GB and 16 GB). That comfortably explains why the refresh succeeds on F32; the upper end of the estimate would not fit on F16, so whether it works there depends on how much refresh overhead the model actually needs and on whether other models compete for resources at the same time.

 

If this response was helpful in any way, I'd gladly accept a 👍 much like the joy of seeing a DAX measure work first time without needing another FILTER.

Please mark it as the correct solution. It helps other community members find their way faster (and saves them from another endless loop 🌀).


4 REPLIES
AntoineW
Memorable Member

Hi @wojciech,

 

The conclusion “Total = 23.14 GB” is misleading, and here's why:

Your query sums the mashup memory of all tables, but a semantic model refresh does not load all tables at once.

 

Power BI / Fabric refreshes tables sequentially, unless parallelism is explicitly enabled.

➡️ The only meaningful metric for capacity sizing is MaxMemoryGB, because the refresh capacity needs to support the largest Mashup step, not the sum.

 

So in your output:

  • MaxMemoryGB = 0.44 GB

  • Avg ~0.24 GB per table

  • 94 tables → sum = ~23 GB (but irrelevant)

You should ignore the total.
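
Since the per-refresh maximum is the figure that has to fit on the target SKU, it is worth trending it over time rather than judging a single run. A sketch reusing your extraction pattern (the Timestamp column name is an assumption to check against the schema):

SemanticModelLogs
| where ItemId == "00000-0000-0000-83db-0000000000000000"
      and OperationName == "ProgressReportEnd"
      and OperationDetailName == "TabularRefresh"
| extend PeakKB = tolong(extract(@"MashupPeakMemory:\s*([0-9]+)", 1, EventText))
| where isnotnull(PeakKB)
// Largest single mashup peak observed per day.
| summarize MaxMashupPeakGB = round(max(PeakKB) / 1024.0 / 1024.0, 2) by bin(Timestamp, 1d)
| order by Timestamp asc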

 

To estimate the right Fabric capacity size, it’s not as straightforward as it may seem. Capacity sizing depends heavily on your workloads, data volume, refresh frequency, and concurrency — and this can only be assessed properly through a Proof of Concept (PoC).

During the PoC, you would typically:

  • Deploy the Capacity Metrics app to monitor usage

  • Run your real workloads (pipelines, notebooks, semantic models, reports)

  • Measure CPU, memory, concurrency, and overload events

  • Adjust accordingly before committing to any SKU

If needed, you can also work with a Microsoft partner to guide you through the sizing, governance, cost optimization, and best practices.

The good news is that you already have access to a free 60-day Fabric trial on an F64, which is more than enough to test end-to-end scenarios and evaluate what your real capacity requirements might be.

 

References:

- https://www.microsoft.com/en-us/microsoft-fabric/capacity-estimator
- https://azure.microsoft.com/en-us/pricing/details/microsoft-fabric/

 

Hope this helps!

Best regards,

Antoine

Thank you for your reply and help!


Thank you sir
