I can run a ForEach loop to insert records into a correctly partitioned data lake table (i.e. the ForEach variable is also the partition id), and this works fine sequentially. However, if I run it in parallel I get the following error:
Failed to create Livy session for executing notebook. Error: [TooManyRequestsForCapacity] Unable to submit this request because all the available capacity is currently being used. Cancel a currently running Notebook or Spark Job Definition job, increase your available capacity, or try again later
I have tried reducing the batch count, and it just about works for a count of 2. For a count of 3 it works for the first 3 iterations only, and anything over that fails after the first. Are there any other settings I can change to make this work in parallel for all 9 partition ids?
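For reference, the insert inside the notebook is roughly along these lines (a simplified sketch; rows_for and the table name are placeholders, and partition_id arrives as a notebook parameter):

```python
# Simplified sketch of the per-partition insert. rows_for() and the table
# name are placeholders; partition_id is passed in as a notebook parameter.
df = spark.createDataFrame(rows_for(partition_id))

# The Delta table is already partitioned on partition_id, so a plain append
# lands the rows in the correct partition.
df.write.format("delta").mode("append").saveAsTable("my_lakehouse.events")
```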
Hi @coolie ,
Thanks for using Fabric Community.
The error message "[TooManyRequestsForCapacity] Unable to submit this request because all the available capacity is currently being used" means that your Fabric capacity has no free Spark resources left to start another Livy session. Each notebook run in a parallel ForEach iteration requests its own session, so once the sessions already in flight have consumed the available capacity, every further submission is rejected.
Possible Solutions
Here are some solutions you can try to address the error:
1. Increase cluster capacity:
If possible, increase the resources available to your Fabric capacity. This can mean moving to a higher capacity SKU, or configuring the workspace Spark pool with more or larger nodes. More resources allow more Livy sessions to run concurrently, so more ForEach iterations can execute in parallel.
2. Optimize for-each loop:
Reduce batch size: Instead of processing all records for a partition in a single batch, divide them into smaller batches. This distributes the workload more evenly and helps keep the number of concurrent sessions within what the capacity can serve.
Use asynchronous processing: Consider driving the partition runs from a single orchestrator notebook with a capped number of asynchronous tasks or threads, so other work can continue while partitions are being processed (see the sketch after this list).
Use partition-aware scheduling: If your data processing framework supports partition-aware scheduling, use it to assign each partition to a different worker node in the cluster. Each node is then responsible for only a single partition, reducing the chance of overloading any individual node.
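As a concrete illustration of the asynchronous approach, here is a minimal sketch of throttled parallel notebook runs driven from one orchestrator notebook. It assumes a child notebook named "LoadPartition" that accepts a "partition_id" parameter (both names are placeholders):

```python
# Sketch: run one child notebook per partition, with a concurrency cap that
# plays the same role as the ForEach batch count.
# "LoadPartition" and "partition_id" are assumed names for the child notebook
# and its parameter; adjust them to your own.
from concurrent.futures import ThreadPoolExecutor

from notebookutils import mssparkutils  # preinstalled in Fabric notebooks

partition_ids = list(range(9))
MAX_CONCURRENCY = 3  # tune to what your capacity tier can sustain

def run_partition(pid):
    # Run the child notebook for one partition; 3600 is the timeout in seconds.
    return mssparkutils.notebook.run("LoadPartition", 3600, {"partition_id": pid})

with ThreadPoolExecutor(max_workers=MAX_CONCURRENCY) as pool:
    results = list(pool.map(run_partition, partition_ids))
```

Because mssparkutils.notebook.run executes the child notebook inside the caller's Spark session, this pattern should also avoid requesting a fresh Livy session per partition, unlike separate notebook activities in a pipeline ForEach.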
Thanks, that's helpful, but it's more or less what I did, as I am: 1) setting the batch count; 2) using a notebook in the loop, which I assume constitutes asynchronous processing; 3) using the partition id as the loop variable; and 4) setting retries on the notebook activity. Retries are not listed among your solutions, but using them makes the batch count less critical, as the overall loop will succeed eventually.
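For anyone finding this later: if you drive the loop from an orchestrator notebook rather than a pipeline, the retry setting can be approximated with a wrapper like this (a sketch under the same assumed names as above; it retries only on the capacity error, with a growing back-off):

```python
# Sketch: retry a child notebook run when capacity is exhausted.
# "LoadPartition" and "partition_id" are assumed names; mssparkutils is
# preinstalled in Fabric notebooks.
import time

from notebookutils import mssparkutils

def run_with_retry(pid, attempts=4, backoff_s=60):
    for attempt in range(1, attempts + 1):
        try:
            return mssparkutils.notebook.run("LoadPartition", 3600, {"partition_id": pid})
        except Exception as exc:
            # Re-raise anything that is not the capacity error, and give up
            # after the final attempt.
            if "TooManyRequestsForCapacity" not in str(exc) or attempt == attempts:
                raise
            time.sleep(backoff_s * attempt)  # let running sessions finish
```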