jabate
New Member

Azure Databricks Data Refresh

I have a report that imports data from a persisted table in Databricks. Once the dataset size increased, I received the following error:

Total size of serialized results of 17 tasks (4.1 GB) is bigger than spark.driver.maxResultSize

 

Looking up the error, I found a lot of Spark-specific posts explaining that spark.driver.maxResultSize is a setting which exists to prevent out-of-memory exceptions. The reason I'm posting in a Power BI forum is that I haven't had any issue interacting with the data (either in munging the data or writing it to Hive) on the Databricks side.

 

Does anybody know some details about how the refresh interacts with Spark/Databricks and why it could be causing the issue in this particular situation? I would prefer to understand why it's occurring before I adjust the maxResultSize setting (possibly several times).
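For context, this is my rough understanding of where the limit bites (a minimal sketch, assuming a Databricks notebook with an active `spark` session; the table name is just a placeholder, and the assumption that the import refresh funnels results through the driver is mine, not confirmed):

```python
# Sketch: any action that pulls the full result set back to the driver is
# capped by spark.driver.maxResultSize. collect()/toPandas() are the usual
# triggers, and my assumption is that an import-mode refresh behaves
# similarly because the connector pulls the whole table through the driver.
df = spark.table("my_db.my_table")  # hypothetical/placeholder table name

rows = df.collect()  # fails with "Total size of serialized results of N tasks (...)
                     # is bigger than spark.driver.maxResultSize" once the limit is exceeded
```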

5 REPLIES
v-shex-msft
Community Support

Hi @jabate,

 

I think this issue is more related to database settings. It sounds like the amount of data returned is greater than the default result size limit, so the refresh request was blocked/canceled.

 

Maybe you can take a look at the following links to learn more about this issue:

Total size of serialized results of 16 tasks (1048.5 MB) is bigger than spark.driver.maxResultSize (...

Spark Configuration

 

For the Power BI architecture, you can refer to the link below:

Power BI Security

 

Regards,

Xiaoxin Sheng

Community Support Team _ Xiaoxin
If this post helps, please consider accepting it as the solution to help other members find it more quickly.

Hi Xiaoxin,

 

Thanks for looking at this issue. As an update, we increased the variable to 35 GB on both of the clusters we are running, but still encounter the same 4 GB error when attempting a refresh. We have a ticket in with the dev team to ascertain whether the error is being thrown by our Databricks instance (meaning we missed something in the adjustment of the variable) or whether it's occurring in the attempt to write to our premium capacity storage.
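In case it helps anyone hitting the same thing, this is how I'm double-checking what the running cluster actually picked up (a minimal sketch; as far as I know spark.driver.maxResultSize is read at driver start-up, so it has to go in the cluster's Spark config and the cluster needs a restart before the change applies):

```python
# Sketch: confirm the limit the running driver actually has.
# The cluster Spark config entry we added looks like:
#   spark.driver.maxResultSize 35g
conf = spark.sparkContext.getConf()
print(conf.get("spark.driver.maxResultSize", "not set"))
# If this still reports ~4g, the 35g change never reached the driver and the
# cluster config/restart needs another look; if it reports 35g, the 4 GB
# error is presumably being raised somewhere else in the refresh path.
```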

 

Julius

Hi @jabate,

I'd like to suggest you open a support ticket to get better support from the dev team; I think this issue is more related to Spark itself.

Submit a support ticket

 

Regards,
Xiaoxin Sheng

Community Support Team _ Xiaoxin
If this post helps, please consider accepting it as the solution to help other members find it more quickly.

Thanks Xiaoxin, a ticket is currently in, but I have not heard back and need to follow up on it. The changes have been made in Spark, so I need to confirm with Microsoft support that the issue is not related to the Hive metastore, which holds the uploaded PBIX files.

 

I'll make sure to post their resolution/recommendation once I'm able to get them back on a call and get it sorted out.

pbiusrwus
Microsoft Employee

Did you manage to figure this out? I am getting the same error.
