NimaiAhluwalia
Continued Contributor

Databricks error

OLE DB or ODBC error: [DataSource.Error] ODBC: ERROR [HY000] [Microsoft][Hardy] (35) Error from server: error code: '0' error message: 'Error running query: org.apache.spark.SparkException: Job aborted due to stage failure: Total size of serialized results of 142 tasks (20.1 GB) is bigger than spark.driver.maxResultSize (20.0 GB)

 

Regards,

1 ACCEPTED SOLUTION
v-angzheng-msft
Community Support

Hi @NimaiAhluwalia,

You need to change this parameter in the cluster configuration. In the cluster settings, open Advanced Options, select the Spark tab, and add spark.driver.maxResultSize 0 (for unlimited) or whatever value suits your workload. Setting it to 0 is not recommended, though; it is better to optimize the job by repartitioning so that less data is returned to the driver (see the sketch below the screenshot).

[Screenshot: Spark config field under the cluster's Advanced Options]
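For reference, here is a minimal PySpark sketch of both approaches. It assumes you control session creation yourself (on Databricks you would instead put the same key/value pair in the cluster's Spark config, since spark.driver.maxResultSize is fixed at session startup); the table names and the partition count are hypothetical, for illustration only.

```python
from pyspark.sql import SparkSession

# Assumption: you control session creation (e.g., spark-submit or local Spark).
# On Databricks, set this key/value pair in the cluster's Spark config instead;
# spark.driver.maxResultSize cannot be changed on an already-running session.
spark = (
    SparkSession.builder
    .appName("maxresultsize-demo")
    .config("spark.driver.maxResultSize", "32g")  # finite cap instead of "0" (unlimited)
    .getOrCreate()
)

# Hypothetical table name, for illustration only.
df = spark.read.table("sales_raw")

# Keep the work distributed: repartition to spread rows evenly and write the
# result back out, rather than pulling everything to the driver with
# collect() or toPandas().
df.repartition(400).write.mode("overwrite").saveAsTable("sales_optimized")
```

The usual root cause of this error is a collect() or toPandas() on a large DataFrame, so reducing what reaches the driver is the durable fix; raising the cap only buys headroom.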

 

See the links below for more information:

https://docs.microsoft.com/en-us/azure/databricks/kb/jobs/job-fails-maxresultsize-exception#solution

https://stackoverflow.com/questions/53067556/databricks-exception-total-size-of-serialized-results-i...

https://stackoverflow.com/questions/31058504/spark-1-4-increase-maxresultsize-memory

https://stackoverflow.com/questions/47996396/total-size-of-serialized-results-of-16-tasks-1048-5-mb-...

https://stackoverflow.com/questions/46763214/total-size-of-serialized-results-of-tasks-is-bigger-tha...

https://issues.apache.org/jira/browse/SPARK-12837

 

Best Regards,
Community Support Team _ Zeon Zheng
If this post helps, please consider accepting it as the solution to help other members find it more quickly.


