We have a single Premium P1 node. I am looking at some performance issues. I have looked here, and I have looked at my capacity's usage metrics. The memory thrashing metric is defined as "how many times datasets are evicted from memory due to memory pressure from the usage of multiple datasets."
I have just three datasets in my capacity. Two are fairly large -- the .pbix files are around 500 MB each. The third is much smaller.
I've had just a few occurrences of memory thrashing in my usage history for this week. But I don't get why! When I look at the memory usage around the times when memory thrashing occurred, I do see spikes in memory usage. But the "spikes" only go up to 15 or 16 GB at the most -- which is not close to the 25 GB that a P1 node has. Right?
Here is the data, showing 4 periods of memory thrashing (yellow, blue, red, green) and the memory usage from the same periods:
Why do you think datasets are being evicted from/re-loaded to memory, when my maximum memory usage never seems to approach 25 GB?
One symptom, I think: during one of those periods of memory thrashing, I had a scheduled dataset refresh fail, with the following error...
You have reached the maximum allowable memory allocation for your tier.
Consider upgrading to a tier with more available memory.
The command has been canceled.
The exception was raised by the IDbCommand interface.
Thanks @GilbertQ...
My average memory usage, for the whole capacity, is 6 GB. Looking at the line chart, it generally hovers between 5 and 7 GB.
The two large datasets are refreshed every 2 hours, around the clock. During periods when one of the large datasets is being refreshed, memory usage for the capacity typically goes up to around 7 GB. Occasionally the dataset refreshes take longer than usual, and the refreshes of the two datasets overlap; even when both datasets are being refreshed simultaneously, memory usage for the capacity typically does not exceed 10 GB.
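For what it's worth, here is roughly how I'm checking whether the two refresh windows overlap. It's just a quick sketch run against a CSV export of the refresh history; the file name and the DatasetName/StartTime/EndTime column names are my assumptions, not the actual export schema:

import pandas as pd

# Hypothetical export of refresh history for the two large datasets.
# Column names (DatasetName, StartTime, EndTime) are assumed, not the
# real export schema.
history = pd.read_csv("refresh_history.csv", parse_dates=["StartTime", "EndTime"])

a = history[history["DatasetName"] == "LargeDatasetA"].sort_values("StartTime")
b = history[history["DatasetName"] == "LargeDatasetB"].sort_values("StartTime")

overlaps = []
for _, ra in a.iterrows():
    # Two refresh windows overlap if each one starts before the other ends.
    hit = b[(b["StartTime"] < ra["EndTime"]) & (b["EndTime"] > ra["StartTime"])]
    for _, rb in hit.iterrows():
        overlaps.append((ra["StartTime"], ra["EndTime"], rb["StartTime"], rb["EndTime"]))

print(f"{len(overlaps)} overlapping refresh windows found")
for a_start, a_end, b_start, b_end in overlaps:
    print(f"A: {a_start} - {a_end}  |  B: {b_start} - {b_end}")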
Looking again at the memory usage data export (which shows memory usage for each three-minute period during the last week), there were only 16 three-minute periods, out of 3,360, where memory usage exceeded 10 GB. Maybe those were periods of high report consumption and/or data refreshes -- I'm not sure.
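This is how I counted those periods, in case I've botched something. Again, just a sketch against the exported CSV; the file name and the TimePoint/MemoryGB column names are guesses at the export's headers:

import pandas as pd

# Exported capacity metrics: one row per three-minute period over the last week.
# Column names ("TimePoint", "MemoryGB") are assumptions about the export.
usage = pd.read_csv("memory_usage_export.csv", parse_dates=["TimePoint"])

total_periods = len(usage)               # ~3,360 for 7 days of 3-minute samples
over_10gb = usage[usage["MemoryGB"] > 10]

print(f"{len(over_10gb)} of {total_periods} periods exceeded 10 GB")
print(f"average: {usage['MemoryGB'].mean():.1f} GB, max: {usage['MemoryGB'].max():.1f} GB")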
All of that is just to say that it seems like my datasets are never "sitting at above 12.5 GB" -- right? So again, I can't figure out why there would ever be dataset eviction/thrashing.