askojuvonen
New Member

"This spark job can't be run because you have hit a spark compute or API rate limit"

Occasionally, executing a Notebook in Fabric stops with this error:

 

"InvalidHttpRequestToLivy: [TooManyRequestsForCapacity] This spark job can't be run because you have hit a spark compute or API rate limit. To run this spark job, cancel an active Spark job through the Monitoring hub, choose a larger capacity SKU, or try again later. HTTP status code: 430 {Learn more} HTTP status code: 430."

 

What could cause this, and what should I do? As I said, this happens only from time to time, and I haven't found a root cause for the error.

2 ACCEPTED SOLUTIONS
andrewsommer
Memorable Member

Each Fabric workspace has a capacity SKU that defines the available resources. When too many jobs run concurrently, or when jobs are large or long-running, they can exhaust memory, CPU (vCores), or concurrency slots for Spark jobs. These limits are workspace-wide, so parallel workloads (even from other users) may push you over the threshold.

 

Fabric uses Apache Livy under the hood for job submission and monitoring.  If multiple Spark jobs (especially via Notebooks, pipelines, or automation) are submitted in quick succession, Livy may throttle requests.

 

You can retry later, introduce delay logic, and run jobs off-peak. The easiest solution is to upgrade to a larger capacity SKU.
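The "introduce delay logic" suggestion can be sketched as a retry wrapper with exponential backoff. This is a minimal illustration, not an official Fabric API: `submit_job` is a hypothetical placeholder for whatever submits your work (e.g. a wrapper around `mssparkutils.notebook.run` or a REST call), and the error-message check assumes the `TooManyRequestsForCapacity` text shown in the question.

```python
import random
import time

def run_with_backoff(submit_job, max_attempts=5, base_delay=30.0):
    """Retry a throttled Spark job submission with exponential backoff.

    `submit_job` is any zero-argument callable (a placeholder here) that
    raises an exception whose message contains "TooManyRequestsForCapacity"
    when the capacity is saturated.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return submit_job()
        except Exception as exc:
            # Re-raise anything that isn't the 430 throttling error,
            # and give up once the retry budget is spent.
            if "TooManyRequestsForCapacity" not in str(exc) or attempt == max_attempts:
                raise
            # Exponential backoff with jitter: base, 2x, 4x, ... plus noise,
            # so several retrying clients don't all resubmit at once.
            delay = base_delay * 2 ** (attempt - 1) + random.uniform(0, base_delay / 3)
            time.sleep(delay)
```

The jitter matters in practice: if several pipelines hit the limit together and all retry after the same fixed delay, they collide again.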

 

Please mark this post as the solution if it helps you. Kudos are appreciated.


v-tsaipranay
Community Support

Hi @askojuvonen ,

Thank you for reaching out to the Microsoft Fabric Community.

 

As @andrewsommer correctly pointed out, the error you are encountering typically occurs when your workspace exceeds its Spark compute or API rate limits. This can result from high concurrency, large or long-running Spark jobs, or multiple job submissions in quick succession. Since Microsoft Fabric uses Apache Livy to orchestrate Spark job execution, exceeding these thresholds may lead to request throttling, resulting in the HTTP 430 response you observed.

These limits are enforced at the workspace level, meaning that parallel workloads submitted by other users within the same workspace may also contribute to resource exhaustion. To mitigate this, we recommend reviewing active Spark jobs through the Monitoring Hub, introducing retry or delay logic for automated processes, and scheduling resource-intensive jobs during off-peak hours. If high concurrency is expected on a regular basis, upgrading to a higher capacity SKU may be necessary to ensure sufficient compute availability.
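Besides the Monitoring Hub UI mentioned above, active sessions can be inspected programmatically. The sketch below uses the standard Apache Livy REST API (`GET /sessions`), which Fabric uses under the hood; the `livy_base_url` and `token` values are placeholders you would need to fill in with your workspace's Livy endpoint and a valid access token, and the exact endpoint format is an assumption here.

```python
import json
import urllib.request

def list_active_sessions(livy_base_url, token):
    """List Livy sessions still consuming Spark capacity.

    `livy_base_url` and `token` are hypothetical placeholders: supply your
    workspace's Livy endpoint and an access token for it.
    """
    req = urllib.request.Request(
        f"{livy_base_url}/sessions",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        payload = json.load(resp)
    # Keep only sessions in states that still hold compute resources.
    return [
        s for s in payload.get("sessions", [])
        if s.get("state") in ("starting", "running", "busy")
    ]
```

Logging this list before a scheduled submission can reveal whether another workload in the same workspace is what pushes you over the concurrency limit.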

 

For further technical reference, please consult the following Microsoft Learn articles:

Concurrency limits and queueing in Apache Spark for Microsoft Fabric

Job queueing in Apache Spark for Microsoft Fabric

 

I hope these suggestions give you a good idea of the cause; if you need any further assistance, feel free to reach out.

If this post helps, then please give us Kudos and consider accepting it as a solution to help other members find it more quickly.

 

Thank you. 


