Microsoft Fabric Community Conference 2025, March 31 - April 2, Las Vegas, Nevada. Use code FABINSIDER for a $400 discount.
I periodically encounter memory errors when working with datasets that are far smaller than the advertised limit for F64 (i.e. ~25 GB per model).
For example, today we got the error "The operation was throttled by Power BI because of insufficient memory. Please try again later". This was during an import/refresh via the Tabular Object Model (TOM).
RootActivityId: 6f509224-0581-4734-aeb2-7c90d0f18671
Date (UTC): 2/17/2025 12:34:10 PM
The model in question uses the so-called large semantic model format.
According to DAX Studio, the model in question is only about ~6 GB.
Thankfully these errors are relatively rare. But I think it shows that the so-called "reserved capacity" is not truly reserved: our RAM may or may not be available in the PBI platform at the moment it is needed.
Is there a well-documented SLA that permits the PBI platform to generate these random memory/throttling errors (even on a "reserved capacity")? It is frustrating that we should be dealing with noisy-neighbor issues even when hosting on a large "F" SKU like F64.
Each capacity only offers half of its memory for refresh, i.e. with F64 you will be able to refresh a semantic model of size ~12.5 GB.
Also, the Max memory (GB) value (F64 = 25 GB) represents an upper bound for the semantic model size. However, an amount of memory must be reserved for operations such as refreshes and queries on the semantic model. The maximum semantic model size permitted on a capacity might be smaller than the numbers in this column.
Reference https://learn.microsoft.com/en-us/power-bi/enterprise/service-premium-what-is#semantic-model-sku-lim...
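To make the rule of thumb above concrete, here is a small sketch (my own illustration, not official guidance). The halving reflects that a full refresh keeps both the old and new copies of the model in memory at once; the per-SKU numbers are the documented "Max memory (GB)" values, and the halving is an approximation, not a documented guarantee:

```python
# Documented "Max memory (GB)" per Fabric SKU (check the linked docs for your SKU).
SKU_MAX_MEMORY_GB = {
    "F64": 25,
    "F128": 50,
    "F256": 100,
}

def approx_max_refreshable_model_gb(sku: str) -> float:
    """Rough ceiling for a full-refresh model size: a full refresh holds both
    the old and the new copy of the model in memory, so plan for ~half."""
    return SKU_MAX_MEMORY_GB[sku] / 2

print(approx_max_refreshable_model_gb("F64"))  # ~12.5 GB on F64
```

So a ~6 GB model is comfortably under the ~12.5 GB ceiling on F64, which is why concurrent workloads (rather than this one model) are the more likely culprit.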
Do check the size in the Power BI web portal; this will give you the actual size of your model:
Workspace -> Settings -> Manage group storage
To mitigate this issue, I would suggest implementing incremental refresh.
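One way to run table-scoped, low-parallelism refreshes (and to honor an incremental refresh policy) is the enhanced refresh REST API. A minimal sketch, assuming placeholder workspace/dataset IDs and a hypothetical table name "Sales"; it only builds and prints the request body:

```python
import json

# Placeholder IDs - substitute your own workspace and dataset GUIDs.
GROUP_ID = "<workspace-id>"
DATASET_ID = "<dataset-id>"
REFRESH_URL = (
    f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}"
    f"/datasets/{DATASET_ID}/refreshes"
)

payload = {
    "type": "Full",
    "commitMode": "transactional",
    "maxParallelism": 1,             # refresh one object at a time to limit memory spikes
    "applyRefreshPolicy": True,      # honor the incremental refresh policy if one exists
    "objects": [{"table": "Sales"}], # hypothetical table name - scope the refresh
}

print(json.dumps(payload, indent=2))
# POST this body to REFRESH_URL with an AAD bearer token, e.g.:
# requests.post(REFRESH_URL, json=payload,
#               headers={"Authorization": f"Bearer {token}"})
```

Capping `maxParallelism` and scoping `objects` keeps the transient refresh footprint closer to the size of a single table rather than the whole model.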
Hi @dbeavon3,
Thank you for reaching out to Microsoft Fabric Community.
Thank you @arvindsingh802 for addressing the issue.
In addition to arvindsingh's points, as per my understanding the issue is likely caused by temporary memory spikes during dataset refresh, not a strict dataset size limit. During operations like refreshes, memory usage can temporarily exceed the size of the dataset because of the way Power BI processes data.
Even if your dataset is approximately 6 GB, the refresh process can cause memory usage to spike, potentially exceeding available memory if other operations are also consuming resources. Multiple datasets refreshing simultaneously can consume significant memory.
Regarding the SLA, Microsoft does not explicitly document an SLA that guarantees memory availability for every operation, even on reserved capacities. While F64 provides dedicated resources, memory spikes may lead to occasional throttling.
If these errors persist and memory limits are frequently reached, consider scaling up to a higher SKU.
For more details, please refer to the documents below:
https://learn.microsoft.com/en-us/power-bi/enterprise/service-premium-what-is
If this post helps, then please consider Accepting as the solution to help the other members find it more quickly, don't forget to give a "Kudos" – I’d truly appreciate it!
Thanks and regards,
Anjan Kumar Chippa
I had the exact same issue today (my model was also 6 GB). This was on a P2 capacity. The suspected root cause was that Query scale-out was enabled on the model, and the model errors in Log Analytics stated that:
Request for database '...' was routed to wrong node by the Power BI request router. This is usually caused by intermittent issues. Please try again. Details: '{ 'resourceMoniker': '...', 'replicaType': 'ReadWrite' }'
Unclear why this occurs, but disabling Query scale-out mitigated the refresh error.
This is the KQL run in Log Analytics:
PowerBIDatasetsWorkspace
| where Level contains "Error"
| where EventText contains "replica"
| project TimeGenerated, EventText
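For anyone who wants to apply the same mitigation programmatically: query scale-out can be turned off per dataset with the Datasets - Update Dataset In Group REST API by setting `maxReadOnlyReplicas` to 0. A sketch with placeholder IDs; it only builds and prints the request body, and the PATCH itself is left as a comment:

```python
import json

# Placeholder IDs - substitute your own workspace and dataset GUIDs.
GROUP_ID = "<workspace-id>"
DATASET_ID = "<dataset-id>"
UPDATE_URL = (
    f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}"
    f"/datasets/{DATASET_ID}"
)

body = {
    "queryScaleOutSettings": {
        "autoSyncReadOnlyReplicas": True,  # keep replicas synced if re-enabled later
        "maxReadOnlyReplicas": 0,          # 0 disables read-only replicas (scale-out off)
    }
}

print(json.dumps(body))
# PATCH this body to UPDATE_URL with an AAD bearer token, e.g.:
# requests.patch(UPDATE_URL, json=body,
#                headers={"Authorization": f"Bearer {token}"})
```

With no read-only replicas, the request router has nowhere to mis-route refresh traffic, which matches the observed fix.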
Hi @arvindsingh802
We already limit the size of refresh operations. We only refresh one table at a time. The largest table is under 3 GB.
You seem to be putting the fault on the customer for this error message ("The operation was throttled by Power BI because of insufficient memory. Please try again later").
... however, the error message does NOT give us any actionable details that would put the blame on our own custom workloads. We know virtually nothing about the memory constraints that caused this error. I'm sure Microsoft would be quick to tell us the details if we were creating a "self-inflicted" problem. E.g. if the issue was triggered within the scope of one single dataset, then the error should tell me how much RAM (out of 25 GB) the dataset was using at that moment. Customers actually want to know when they are causing self-inflicted problems, because those are the problems we can actually fix and avoid in the future.
However, in this case I'm pretty certain that the PG team does NOT want to share the reason for the throttling error, because it is probably caused by undocumented constraints, by other customers, or by a bug in the service.
Please let me know if there are any tools for testing your theory that the RAM usage in a single dataset had reached anywhere near the 25 GB limit. It seems extremely unlikely to me, given that this dataset normally occupies only ~6 GB, and it is a fairly new and inactive dataset at this point in time (although that may change in the future). Here are the details you had referred to:
Any help would be appreciated. I am happy to theorize about the source of these errors, as long as there is a way to confirm or deny the theories. I find it very difficult to manage PBI. There is very little logging or telemetry that is made available to customers, although the support engineers will tell us that Microsoft has lots of that information available to them. In my opinion, customers should not be forced to open dozens of PBI support cases a year to make sense of the environment. There should not be an endless stream of error messages generated by this platform which are totally meaningless and non-actionable.
Hi @dbeavon3,
Thank you for providing more details, your concerns about transparency and actionable insights are completely understandable.
Even though your dataset is approximately 6 GB and the largest table is under 3 GB, memory usage can fluctuate due to temporary spikes, and concurrency limits can affect memory availability. Even if only one table is refreshing at a time, background Power BI service processes may still consume resources. Additionally, other datasets within the same capacity can contribute to memory contention: even though the dataset itself may not reach the 25 GB limit, concurrent workloads running on the same capacity can affect overall performance.
To analyze memory usage and determine whether RAM usage actually reached the capacity limit, the best tool is the Microsoft Fabric Capacity Metrics app, which provides near-real-time insights into capacity usage.
Would you be able to check the Capacity Metrics app to see if memory pressure was recorded at the time of the error? If we can gather that data, we will have more clarity on whether this is a true memory constraint or an external limitation.
If this post helps, then please consider Accepting as solution to help the other members find it more quickly, don't forget to give a "Kudos" – I’d truly appreciate it!
Thanks and regards,
Anjan Kumar Chippa
Hi @dbeavon3,
As we haven’t heard back from you, we wanted to kindly follow up to check if the solution provided resolved the issue.
If my response addressed the issue, please mark it as "Accept as solution" and click "Yes" if you found it helpful.
Thanks and regards,
Anjan Kumar Chippa
Hi @dbeavon3,
We wanted to kindly follow up to check if the solution I have provided for the issue worked.
If my response addressed, please mark it as "Accept as solution" and click "Yes" if you found it helpful.
Thanks and regards,
Anjan Kumar Chippa
Hi @dbeavon3,
As we haven’t heard back from you, we wanted to kindly follow up to check if the solution I have provided for the issue worked.
If my response addressed, please mark it as "Accept as solution" and click "Yes" if you found it helpful.
Thanks and regards,
Anjan Kumar Chippa