Hi
At the moment we are on an A4 Power BI Embedded SKU.
Our data model is around 10.4 GB, which leaves around 14.6 GB of the capacity's memory for refreshing the model.
However, the data model refresh still fails, and the error messages indicate it is due to insufficient memory.
We know there is a rule of thumb that the in-memory size of the data model roughly doubles during a refresh (i.e. on a 25 GB A4, a model of up to 12.5 GB plus 12.5 GB reserved for the refresh). However, that rule does not seem to hold in our scenario, and it seems impossible to understand or measure the temporary memory the model refresh actually requires.
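For reference, the arithmetic we are trying to reason about looks roughly like the sketch below; the doubling factor comes from the rule of thumb above, the extra overhead factor is only an assumption on our part, and 25 GB is the A4 memory limit:

```python
# Rough headroom estimate for a full refresh on an A4 capacity (25 GB).
# Assumptions: a full refresh keeps the old copy of the model in memory
# while the new copy is processed (~2x model size), plus some temporary
# overhead for dictionaries/processing buffers, plus whatever other
# datasets are loaded on the same capacity. The overhead factor below
# is a guess, not a published number.

SKU_MEMORY_GB = 25.0        # A4 capacity memory
MODEL_SIZE_GB = 10.4        # current in-memory model size
TEMP_OVERHEAD_FACTOR = 0.2  # assumed extra memory for processing buffers
OTHER_DATASETS_GB = 0.0     # other models loaded on the same capacity

naive_peak = MODEL_SIZE_GB * 2
peak_with_overhead = naive_peak * (1 + TEMP_OVERHEAD_FACTOR) + OTHER_DATASETS_GB

print(f"Naive 2x estimate:      {naive_peak:.1f} GB")
print(f"Estimate with overhead: {peak_with_overhead:.1f} GB")
print(f"Headroom on A4:         {SKU_MEMORY_GB - peak_with_overhead:.1f} GB")
```

Even with a modest overhead beyond the naive 2x, the estimate sits right at the 25 GB limit, which matches what we are seeing.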
Could anyone provide more insight and help for this situation?
Many thanks
Hi, @JasW
Here are a few suggestions that might help:
1. Try incremental refresh so that only new or changed data has to be processed during each refresh, instead of reloading the full model.
2. Check the data source side, for example by using a dedicated data source, so the refresh is not held up by the source.
3. If the model keeps growing, consider increasing the available memory by moving to a larger Power BI Embedded SKU.
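As a supplement: if the capacity cannot hold two full copies of the model at once, you could also look at the enhanced refresh REST API, which lets you refresh individual tables in smaller commits instead of one large transaction, so peak memory is generally lower. The sketch below is only an outline; the workspace/dataset IDs, table names and token handling are placeholders you would replace with your own:

```python
# Sketch: trigger an enhanced refresh that processes selected tables in
# smaller batches ("partialBatch") rather than one big transaction.
# All IDs, table names and the access token are placeholders.

import requests

ACCESS_TOKEN = "<AAD access token with Dataset.ReadWrite.All>"  # placeholder
GROUP_ID = "<workspace id>"                                      # placeholder
DATASET_ID = "<dataset id>"                                      # placeholder

url = (f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}"
       f"/datasets/{DATASET_ID}/refreshes")

body = {
    "type": "Full",
    # partialBatch commits objects in batches instead of a single
    # transaction, which generally lowers peak memory during the refresh.
    "commitMode": "partialBatch",
    "maxParallelism": 1,            # limit parallel processing to cap memory
    "objects": [
        {"table": "FactSales"},     # hypothetical table names
        {"table": "DimCustomer"},
    ],
}

resp = requests.post(url, json=body,
                     headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
resp.raise_for_status()
print("Refresh request accepted:", resp.status_code)  # expect 202
```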
Best Regards,
Community Support Team _Charlotte
If this post helps, then please consider accepting it as the solution to help other members find it more quickly.
Hi Charlotte,
Thanks for your response.
May I know what the implications of using incremental refresh are? We did try incremental refresh, but it was not successful because incremental refresh requires the source API to know which data is new or has been updated.
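To illustrate what I mean: as far as I understand, incremental refresh only works when the source can return just the rows inside the RangeStart/RangeEnd window, i.e. filter on a date or last-modified column, along the lines of the sketch below (the endpoint and parameter names are made up for illustration):

```python
# Sketch of the date-range filtering an incremental refresh depends on:
# the source (or its API) must be able to return only the rows whose
# date/modified column falls inside the requested window.
# The endpoint and parameter names here are hypothetical.

from datetime import datetime, timedelta, timezone

import requests

range_end = datetime.now(timezone.utc)
range_start = range_end - timedelta(days=7)   # e.g. refresh only the last 7 days

resp = requests.get(
    "https://example.com/api/sales",          # hypothetical source API
    params={
        "modified_from": range_start.isoformat(),
        "modified_to": range_end.isoformat(),
    },
)
resp.raise_for_status()
rows = resp.json()
print(f"Rows to load for the incremental window: {len(rows)}")
```

Our source does not expose that kind of change information, which is why incremental refresh did not work for us.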
We are using Azure Data Factory, which is already part of a dedicated data source as you suggested.
I think increasing the memory allocation is not really an option for us, as it would mean upgrading to the next Power BI Embedded SKU tier and the cost would double.
The issue is that the memory the model "unfolds" into during a refresh is not transparent, so it is unclear to us whether the A4 SKU is enough to complete the refresh. It would be really costly to keep buying more memory just to complete the refresh when at least half of the capacity's memory is still available (our data model is around 10.4 GB, leaving around 14.6 GB of the A4 SKU's 25 GB for refreshing).
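In case it helps frame the question, something like the sketch below could be used to pull the recent refresh attempts and their error payloads from the REST API, to confirm that the failures really are memory-related; the IDs and token handling are placeholders:

```python
# Sketch: list the latest refresh attempts for the dataset and print the
# error payload of any failures. IDs and the access token are placeholders.

import requests

ACCESS_TOKEN = "<AAD access token>"   # placeholder
GROUP_ID = "<workspace id>"           # placeholder
DATASET_ID = "<dataset id>"           # placeholder

url = (f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}"
       f"/datasets/{DATASET_ID}/refreshes?$top=5")

resp = requests.get(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
resp.raise_for_status()

for refresh in resp.json().get("value", []):
    # Failed refreshes usually carry their error details in serviceExceptionJson.
    print(refresh.get("startTime"), refresh.get("status"),
          refresh.get("serviceExceptionJson"))
```

What we are still missing is a way to see the peak memory the refresh actually needed, rather than just the fact that it ran out.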