MartinFM
Helper I

InvalidHttpRequestToLivy: [TooManyRequestsForCapacity]

Hello,

 

I get this error every time I start running a notebook in Fabric:

InvalidHttpRequestToLivy: [TooManyRequestsForCapacity] This spark job can't be run because you have hit a spark compute or API rate limit. To run this spark job, cancel an active Spark job through the Monitoring hub, choose a larger capacity SKU, or try again later. HTTP status code: 430.

 

I am aware that this has been discussed before, but my challenge is slightly different: I get the error even though I am not running anything, and the error has a time limit. Suddenly everything works again.

 

Here is the context:

F2 SKU

I am the only user

It is a paid capacity (not a trial)

 

Example:

I get to work in the morning. Fabric has not been used since yesterday. I log in, find my notebook, run it, BAM, error pops up. Continuously.

A random amount of time goes by, say 10 minutes to an hour. Suddenly everything works. I can run my notebooks however I please for the rest of the day. Next morning, same problem.

 

It seems to me that something must be running in the background, but I cannot figure out what it is. When I inspect the Monitoring hub, nothing is running. I have tried installing the Fabric Capacity Metrics app, but I am honestly not sure whether that app can show me what the problem is. I have not figured it out.

 

I hope you can help 🙂

1 ACCEPTED SOLUTION

Can you try to create a custom Spark pool for the notebooks and set the node size to Small, with autoscale and dynamic executor allocation switched off, and the values set to 1 or 2? The standard Spark pool already consumes too much for an F2. You will need to attach the custom Spark pool to your notebooks and try running them again.


8 REPLIES
Anonymous
Not applicable

Hi @MartinFM,

I'd like to suggest you take a look at the following links about the Spark job and semantic model limits for different SKUs:

Concurrency limits and queueing in Apache Spark for Microsoft Fabric

What is Power BI Premium? - Power BI | Microsoft Learn

According to the documentation, the F2 SKU queue limit is 4. You may need to upgrade the capacity SKU to allow more requests.

Fabric capacity SKU | Equivalent Power BI SKU | Spark VCores | Max Spark VCores with Burst Factor | Queue limit
F2                  | -                       | 4            | 20                                 | 4
F4                  | -                       | 8            | 24                                 | 4
F8                  | -                       | 16           | 48                                 | 8
F16                 | -                       | 32           | 96                                 | 16
F32                 | -                       | 64           | 192                                | 32
F64                 | P1                      | 128          | 384                                | 64
F128                | P2                      | 256          | 768                                | 128
F256                | P3                      | 512          | 1536                               | 256
F512                | P4                      | 1024         | 3072                               | 512
F1024               | -                       | 2048         | 6144                               | 1024
F2048               | -                       | 4096         | 12288                              | 2048
Trial Capacity      | P1                      | 128          | 128                                | NA
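To make the F2 row concrete, here is a quick back-of-the-envelope check in Python. It assumes the documented node sizes (Small = 4 VCores, Medium = 8 VCores) and that the starter pool runs on Medium nodes; the node counts are illustrative, not measured:

# Rough check: does a Spark session fit under an F2's burst ceiling?
# Node VCore sizes are my reading of the Fabric docs (assumption).
NODE_VCORES = {"Small": 4, "Medium": 8, "Large": 16}

F2_MAX_BURST_VCORES = 20  # from the table above

def session_vcores(node_size: str, num_nodes: int) -> int:
    """Total VCores a session occupies across driver and executor nodes."""
    return NODE_VCORES[node_size] * num_nodes

# Starter pool: Medium nodes, autoscaling to a few nodes (assumed count)
starter = session_vcores("Medium", num_nodes=3)   # 24 VCores
# Custom pool per the accepted solution: Small nodes, capped at 2
custom = session_vcores("Small", num_nodes=2)     # 8 VCores

for name, used in [("starter pool", starter), ("custom small pool", custom)]:
    verdict = "fits" if used <= F2_MAX_BURST_VCORES else "exceeds burst limit"
    print(f"{name}: {used} VCores -> {verdict}")

Under these assumptions the default starter pool already overshoots the 20-VCore burst ceiling of an F2, which would produce exactly the 430 rejection above.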

Regards,

Xiaoxin Sheng

BTW, your link points to a 404.

Hello,

Thank you for your reply.

So, to be more precise: this is a very simple Fabric instance. There are no semantic models, and updating of the default semantic model has been switched off. I am only running on the default starter pool, and no parameters have been changed for it. There are no scheduled jobs. There is a single lakehouse with maybe 100 quite small files in it. There are 6 runnable PySpark notebooks, all started manually, and I only run 1 notebook at a time.

I have installed the Fabric Capacity Metrics app. All graphs top out at 50% utilization. I have no data on overages.

Given these circumstances I find it odd that I should be overutilizing the capacity. I would have expected to still have an abundance of compute.

I can confirm that the problem persists.

 

FabianSchut
Super User

Hi, Fabric applies "smoothing" to jobs that exceed their allocated compute units (CUs), allowing them to "borrow" CUs from future periods. For scheduled and background jobs, this borrowing can extend up to 24 hours: https://learn.microsoft.com/en-us/fabric/data-warehouse/compute-capacity-smoothing-throttling.

This could explain why jobs sometimes run unpredictably, like in the morning. I recommend checking the Fabric capacity metrics for any peak within the 24 hours leading up to the error.
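As a rough illustration, throttling escalates based on how much future capacity has already been consumed. A toy model in Python; the stage thresholds (10 minutes, 60 minutes, 24 hours) are my paraphrase of the smoothing/throttling documentation, not an exact reproduction of Fabric's accounting:

# Toy model of Fabric's throttling stages. Thresholds are an assumption
# based on my reading of the smoothing/throttling docs linked above.
def throttle_state(borrowed_minutes: float) -> str:
    """Map minutes of future capacity already consumed to a throttling stage."""
    if borrowed_minutes <= 10:
        return "overage protection: all jobs run"
    if borrowed_minutes <= 60:
        return "interactive jobs delayed"
    if borrowed_minutes <= 24 * 60:
        return "interactive jobs rejected (the HTTP 430 seen here)"
    return "all new jobs rejected"

for borrowed in (5, 30, 120, 2000):
    print(f"{borrowed} min borrowed -> {throttle_state(borrowed)}")

In this model, a spike the previous evening could leave the capacity repaying borrowed minutes overnight, which would explain a rejection first thing in the morning that clears on its own.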

 

Hello,

Thank you for your reply.

When I look in the metrics app, everything seems to be OK. I am not fully sure how to read the app, but all graphs top out at 50%. No graphs go above 100% utilization, and I have no data on overages.

The link you sent me seems to be specific to data warehouses. I do have a data warehouse in my workspace, but there is nothing in it. I only run PySpark notebooks on a lakehouse, and I only run one notebook at a time. I have no jobs scheduled.

Under these circumstances I find it odd that I should be overutilizing the capacity.

Can you try to create a custom Spark pool for the notebooks and set the node size to Small, with autoscale and dynamic executor allocation switched off, and the values set to 1 or 2? The standard Spark pool already consumes too much for an F2. You will need to attach the custom Spark pool to your notebooks and try running them again.
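For comparison, here is a minimal sketch of what a similarly slimmed-down session could look like if configured per notebook instead, using the Livy-style %%configure magic in the first cell. The property names follow the Synapse/Livy session options that Fabric notebooks accept; the values are illustrative assumptions, not recommendations:

%%configure -f
{
    "driverMemory": "4g",
    "driverCores": 2,
    "executorMemory": "4g",
    "executorCores": 2,
    "numExecutors": 1,
    "conf": {
        "spark.dynamicAllocation.enabled": "false"
    }
}

Either way, the goal is the same: keep the session's total VCores under the F2 burst ceiling shown in the table earlier in this thread.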

So, I managed to create a separate Spark pool and environment. That seems to have helped: I can now consistently run different notebooks without the "TooManyRequestsForCapacity" error. So the problem seems to be the starter pool ... and my lack of knowledge about utilizing these Spark pools.

Yes, I need to acquire more knowledge. It is still odd to me that the starter pool does not work initially but then somehow frees up capacity and can run for almost the entire day.
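One way to verify which settings a session actually picked up is to read the resolved Spark configuration from inside the notebook. A small sketch, assuming the Fabric-provided spark session object; the property names are standard Spark settings:

# Sanity check: confirm the custom pool's settings apply to this session.
# Assumes the notebook-provided `spark` SparkSession (standard in Fabric).
for key in ("spark.executor.cores",
            "spark.executor.memory",
            "spark.executor.instances",
            "spark.dynamicAllocation.enabled"):
    # RuntimeConfig.get takes a default, so unset keys don't raise
    print(key, "=", spark.conf.get(key, "<not set>"))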

 

Thank you for your help. You saved my day 🙂
