Hi,
I am getting the error below when trying to refresh data from Spark (Azure Databricks).
The model and dataset worked fine earlier; after we pushed more data into Spark and tried refreshing, I started getting this error.
Any suggestions would be highly appreciated. Thanks.
ODBC: ERROR [HY000] [Microsoft][Hardy] (35) Error from server: error code: '0' error message: 'org.apache.spark.SparkException: Job aborted due to stage failure: Total size of serialized results of 85 tasks (4.0 GB) is bigger than spark.driver.maxResultSize (4.0 GB)'. Table:
@SyedAli ,
Could you check whether the database driver limits the size of data that can be refreshed? Please also refer to the similar case below:
https://community.powerbi.com/t5/Service/Azure-Data-Bricks-Data-Refresh/m-p/643085
Community Support Team _ Jimmy Tao
If this post helps, please consider accepting it as the solution to help other members find it more quickly.
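The error itself says the driver-side cap was hit: the total serialized results of the refresh query (4.0 GB across 85 tasks) exceeded Spark's `spark.driver.maxResultSize` limit. As a sketch of one possible fix, assuming you have permission to edit the Databricks cluster settings, you could raise that limit in the cluster's Spark config (Cluster → Advanced Options → Spark; restart required):

```
spark.driver.maxResultSize 8g
```

Note this trades against driver memory, so the driver node must have headroom for the larger result. Where possible, reducing the amount of data Power BI pulls per refresh (filtering or aggregating at the source) is usually preferable to raising the cap.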