Occasionally, executing a notebook in Fabric stops with this error:
"InvalidHttpRequestToLivy: [TooManyRequestsForCapacity] This spark job can't be run because you have hit a spark compute or API rate limit. To run this spark job, cancel an active Spark job through the Monitoring hub, choose a larger capacity SKU, or try again later. HTTP status code: 430 {Learn more} HTTP status code: 430."
What could cause this, and what should I do? As I said, this happens only from time to time, and I haven't found any root cause for the error.
Each Fabric workspace has a capacity SKU that defines the available resources. When too many jobs run concurrently, or when jobs are large or long-running, the capacity can exhaust its memory, CPU (vCores), or concurrency slots for Spark jobs. This is workspace-wide, so parallel workloads (even from other users) may push you over the threshold.
Fabric uses Apache Livy under the hood for job submission and monitoring. If multiple Spark jobs (especially via Notebooks, pipelines, or automation) are submitted in quick succession, Livy may throttle requests.
You can retry later, introduce retry/delay logic (see the sketch below), or run jobs off-peak. The easiest solution is to upgrade to a larger capacity SKU.
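As a rough illustration of the retry/delay idea, here is a minimal sketch assuming you trigger a child notebook from a parent notebook with mssparkutils.notebook.run; the notebook name and timeout are placeholders, and you would adapt run_job() and the error check to however you actually submit work.

```python
import time
import random
from notebookutils import mssparkutils  # available by default inside Fabric notebooks

def run_job():
    # Hypothetical child notebook name and 30-minute timeout; replace with your own.
    return mssparkutils.notebook.run("MyChildNotebook", 1800)

def run_with_backoff(max_attempts=5, base_delay_s=60):
    for attempt in range(1, max_attempts + 1):
        try:
            return run_job()
        except Exception as exc:
            message = str(exc)
            # Treat the capacity-throttling error as retryable; re-raise anything else.
            throttled = "TooManyRequestsForCapacity" in message or "430" in message
            if not throttled or attempt == max_attempts:
                raise
            # Exponential backoff with jitter so parallel callers don't retry in lockstep.
            delay = base_delay_s * (2 ** (attempt - 1)) + random.uniform(0, 30)
            print(f"Capacity busy (attempt {attempt}/{max_attempts}); retrying in {delay:.0f}s")
            time.sleep(delay)

result = run_with_backoff()
```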
Please mark this post as the solution if it helps you. Kudos appreciated.
Hi @askojuvonen ,
Thank you for reaching out to the Microsoft Fabric Community.
As @andrewsommer correctly pointed out, the error you are encountering typically occurs when your workspace exceeds its Spark compute or API rate limits. This can result from high concurrency, large or long-running Spark jobs, or multiple job submissions in quick succession. Since Microsoft Fabric uses Apache Livy to orchestrate Spark job execution, exceeding these thresholds may lead to request throttling, resulting in the HTTP 430 response you observed.
These limits are enforced at the workspace level, meaning that parallel workloads submitted by other users within the same workspace may also contribute to resource exhaustion. To mitigate this, we recommend reviewing active Spark jobs through the Monitoring Hub, introducing retry or delay logic for automated processes, and scheduling resource-intensive jobs during off-peak hours. If high concurrency is expected on a regular basis, upgrading to a higher capacity SKU may be necessary to ensure sufficient compute availability.
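One simple way to apply the "retry or delay logic" and off-peak advice when you orchestrate several notebooks yourself is to run them sequentially with a pause between submissions instead of firing them all at once. The sketch below assumes a parent Fabric notebook using mssparkutils.notebook.run; the notebook names, timeout, and pause are placeholders to adjust to your workload.

```python
import time
from notebookutils import mssparkutils  # built into Fabric notebooks

# Hypothetical child notebooks; running them one at a time keeps the number of
# concurrent Spark sessions (and Livy submissions) low on a small capacity SKU.
child_notebooks = ["Load_Sales", "Load_Inventory", "Load_Customers"]

for name in child_notebooks:
    print(f"Running {name} ...")
    mssparkutils.notebook.run(name, 3600)   # 1-hour timeout per child run
    time.sleep(120)                         # stagger submissions to ease capacity/API pressure
```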
For further technical reference, please consult the following Microsoft Learn articles:
Concurrency limits and queueing in Apache Spark for Microsoft Fabric
Job queueing in Apache Spark for Microsoft Fabric
I hope these suggestions give you a good idea of the cause. If you need any further assistance, feel free to reach out.
If this post helps, please give us Kudos and consider accepting it as a solution to help other members find it more quickly.
Thank you.