NimaiAhluwalia
Continued Contributor

Databricks error

OLE DB or ODBC error: [DataSource.Error] ODBC: ERROR [HY000] [Microsoft][Hardy] (35) Error from server: error code: '0' error message: 'Error running query: org.apache.spark.SparkException: Job aborted due to stage failure: Total size of serialized results of 142 tasks (20.1 GB) is bigger than spark.driver.maxResultSize (20.0 GB)

 

Regards

1 ACCEPTED SOLUTION
v-angzheng-msft
Community Support

Hi @NimaiAhluwalia,

You need to change this parameter in the cluster configuration. Go to the cluster settings, and under Advanced Options select the Spark tab and add spark.driver.maxResultSize 0 (for unlimited) or whatever value suits you. Using 0 is not recommended; it is better to optimize the job by repartitioning so that less data is sent back to the driver.

(Screenshot: Spark config field under the cluster's Advanced Options.)
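For reference, here is a minimal sketch of the same setting applied when you control how the SparkSession is built (on a Databricks cluster you would instead put the line spark.driver.maxResultSize 32g in the cluster's Spark config as described above; the 32g value is only illustrative, not a recommendation):

```python
# Sketch with assumed values: raising spark.driver.maxResultSize at session creation.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("maxResultSize-example")
    .config("spark.driver.maxResultSize", "32g")  # illustrative size above the 20 GB limit hit in the error
    .getOrCreate()
)

# Confirm the value actually applied to this session.
print(spark.sparkContext.getConf().get("spark.driver.maxResultSize"))
```

And a sketch of the repartitioning advice, using a hypothetical DataFrame df and output path: keep large results out of the driver by writing them to storage instead of collecting them, and cap anything that genuinely must come back to the driver.

```python
# Hypothetical example: `df` and the output path are placeholders.
df_repartitioned = df.repartition(400)  # more, smaller partitions; 400 is illustrative

# Writing to storage avoids serializing results back to the driver,
# which is what the spark.driver.maxResultSize check guards against.
df_repartitioned.write.mode("overwrite").parquet("/tmp/example_output")

# If rows are really needed on the driver, limit them first rather than collect()-ing everything.
preview = df_repartitioned.limit(1000).collect()
```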

 

See the links below for more information:

https://docs.microsoft.com/en-us/azure/databricks/kb/jobs/job-fails-maxresultsize-exception#solution

https://stackoverflow.com/questions/53067556/databricks-exception-total-size-of-serialized-results-i...

https://stackoverflow.com/questions/31058504/spark-1-4-increase-maxresultsize-memory

https://stackoverflow.com/questions/47996396/total-size-of-serialized-results-of-16-tasks-1048-5-mb-...

https://stackoverflow.com/questions/46763214/total-size-of-serialized-results-of-tasks-is-bigger-tha...

https://issues.apache.org/jira/browse/SPARK-12837

 

Best Regards,
Community Support Team _ Zeon Zheng
If this post helps, then please consider accepting it as the solution to help other members find it more quickly.

 
