Failed barrier ResultStage error when training an XGBoost model
Hello,
I came across an issue when using a PySpark notebook to train an XGBoost model.
Code snippet:
Hi @cfccai,
The error you are encountering, "failed barrier ResultStage," when training an XGBoost model in PySpark is likely caused by a combination of issues related to Spark's barrier execution mode and resource allocation. Here's a detailed explanation:
Barrier Execution Mode Limitations:
XGBoost training in Spark uses barrier execution mode, which ensures that all tasks start simultaneously. However, this mode has strict requirements and limitations, such as the need for sufficient resources to run all tasks concurrently. If these conditions are not met, the job will fail with errors like "failed barrier resultstage" or "could not recover from a failed barrier resultstage".
Dynamic Resource Allocation:
The initial error suggests that dynamic resource allocation was enabled (spark.dynamicAllocation.enabled = true). Barrier execution mode does not support dynamic resource allocation because it requires a fixed number of resources to launch all tasks simultaneously. Disabling dynamic allocation was the correct step.
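As a sketch, disabling dynamic allocation and pinning a fixed executor count can be expressed in the Spark configuration (the instance and core counts below are illustrative, not recommendations; barrier mode needs enough fixed slots for all tasks):

```properties
# Barrier execution mode is incompatible with dynamic allocation
spark.dynamicAllocation.enabled  false
# Pin a fixed number of executors so all barrier tasks can launch together
spark.executor.instances         4
spark.executor.cores             4
```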
Insufficient Resources:
Even after disabling dynamic allocation, the second error indicates that there might not be enough resources (e.g., CPU cores or memory) to execute all tasks concurrently. Barrier tasks require all partitions to complete successfully, and any failure (e.g., due to insufficient resources or partition imbalance) will cause the stage to fail.
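The resource requirement above can be stated concretely: a barrier stage must launch all of its tasks at the same time, so the cluster needs at least as many free task slots as XGBoost workers. A minimal sketch of that check (the helper name and the idea of "slots = executors × cores" are illustrative simplifications, ignoring slots consumed by other jobs):

```python
def barrier_stage_fits(num_executors: int, cores_per_executor: int, num_workers: int) -> bool:
    """Return True if a barrier stage with num_workers tasks can launch at once.

    Barrier execution mode refuses to start unless every task gets a slot
    simultaneously, so the total slot count must cover num_workers.
    """
    total_slots = num_executors * cores_per_executor
    return total_slots >= num_workers

# A 2-executor x 4-core pool has 8 slots: 8 XGBoost workers fit, 9 do not.
print(barrier_stage_fits(2, 4, 8))  # True
print(barrier_stage_fits(2, 4, 9))  # False
```

If the check fails, either lower `num_workers` on the estimator or scale the pool up.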
Partition Imbalance:
If some partitions are empty or unevenly distributed, it can lead to failures in barrier execution mode. This is a common issue when using XGBoost with Spark, as XGBoost automatically repartitions the data but may still encounter imbalances.
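To illustrate why repartitioning helps: Spark's `repartition(n)` redistributes rows roughly round-robin across partitions, which keeps partition sizes within one row of each other and eliminates empty partitions. A pure-Python sketch of that distribution (the helper is hypothetical, not Spark's actual shuffle implementation):

```python
def round_robin_partitions(n_rows: int, n_partitions: int) -> list[int]:
    """Approximate how repartition() balances rows: round-robin assignment
    yields partition sizes that differ by at most one row."""
    sizes = [0] * n_partitions
    for i in range(n_rows):
        sizes[i % n_partitions] += 1
    return sizes

print(round_robin_partitions(10, 4))  # [3, 3, 2, 2] -- no empty partitions
```

In PySpark, calling `df.repartition(num_workers)` before fitting achieves the same balance, so every barrier task receives data.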
Try adjusting the XGBoost parameters:
Reduce resource-intensive parameters such as max_depth or num_round. For instance:

```python
from xgboost.spark import SparkXGBRegressor

xgb_regressor = SparkXGBRegressor(label_col="SalesOrderAmount", num_round=5, max_depth=3)
```

Set num_workers explicitly to control parallelism:

```python
xgb_regressor = SparkXGBRegressor(label_col="SalesOrderAmount", num_round=10, num_workers=2)
```
If this post helps, please give us Kudos and consider accepting it as a solution to help other members find it more quickly.
Has anyone figured this out?
I've hit exactly these two errors in the same order. It seems like XGBoost should run on Fabric, yet Fabric doesn't appear to support it. The Microsoft Fabric docs don't list XGBoost models in the training guides, at least for now. There is SparkML, but it currently only covers simpler models; it seems SparkML will be the go-to library. We still need a working solution, please.
Hi @cfccai,
It seems you turned off the dynamic allocation option, but the existing pool resources are not able to handle the current model. Have you tried reducing the amount of sample data, or manually modifying the environment compute settings to use more resources for these operations?
Compute management in Fabric environments - Microsoft Fabric | Microsoft Learn
Spark pool node size:
Apache Spark compute for Data Engineering and Data Science - Microsoft Fabric | Microsoft Learn
Regards,
Xiaoxin Sheng
If this post helps, please consider accepting it as a solution to help other members find it more quickly.

