I repeatedly get this error message when trying to run notebooks in my F2 capacity:
I have no active spark jobs when I search in the monitoring hub so am unsure how to remedy this issue. Any ideas?
I am facing the same issue on F2 capacity. Which capacity did you move to in order to get past this issue? For me there are no active Spark sessions or jobs running, yet I am still unable to connect to a single Spark session. It seems F2 capacity is just a waste of money.
Hello,
Are you the only one using that capacity? My first instinct would be that it is running workloads in another workspace to which you don't have access.
Hi @hlbchant, as the other posts mention, the problem is most likely caused by capacity limitations.
I would suggest trying high concurrency mode, which allows running multiple notebooks within the same session. As of last week, this feature is also available for notebook executions within pipelines.
Here is more information about it:
Configure high concurrency mode for notebooks - Microsoft Fabric | Microsoft Learn
Configure high concurrency mode for notebooks in pipelines - Microsoft Fabric | Microsoft Learn
Hi @hlbchant ,
Once the max queue limit has been reached for a Fabric capacity, newly submitted jobs are throttled with the error: [TooManyRequestsForCapacity] This spark job can't be run because you have hit a spark compute or API rate limit. To run this spark job, cancel an active Spark job through the Monitoring hub, choose a larger capacity SKU, or try again later. (HTTP status code: 430)
Job queueing in Apache Spark for Fabric - Microsoft Fabric | Microsoft Learn
Also try:
You can also wait for some time; resources may become available and you can run your Spark job again.
If you have multiple jobs to run, try adding them to the queue:
Introducing Job Queueing for Notebook in Microsoft Fabric | Microsoft Fabric Blog | Microsoft Fabric
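Since the 430 throttling error is transient once capacity frees up, the "try again later" advice can be automated with a retry loop and exponential backoff. This is a generic sketch, not a Fabric API: `submit_job` and `CapacityThrottledError` are hypothetical placeholders for whatever call in your code surfaces the TooManyRequestsForCapacity error.

```python
import time


class CapacityThrottledError(Exception):
    """Hypothetical stand-in for the HTTP 430 TooManyRequestsForCapacity error."""


def run_with_backoff(submit_job, max_attempts=5, base_delay=1.0):
    """Retry submit_job with exponential backoff while the capacity is throttled.

    submit_job: zero-argument callable that raises CapacityThrottledError
    when the capacity has no free Spark slots.
    """
    for attempt in range(max_attempts):
        try:
            return submit_job()
        except CapacityThrottledError:
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            # Wait 1s, 2s, 4s, 8s, ... before retrying
            time.sleep(base_delay * (2 ** attempt))
```

This only papers over the symptom, of course; if the capacity is persistently saturated, a larger SKU or high concurrency mode is the real fix.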
Best Regards,
Gao
Community Support Team
If any post helps, please consider accepting it as the solution so other members can find it more quickly.
If I misunderstood your needs or you still have problems, please feel free to let us know. Thanks a lot!
How to get your questions answered quickly -- How to provide sample data in the Power BI Forum
Hi @hlbchant, this error is caused by the Fabric capacity reaching its limit. Although the message suggests checking active Spark jobs in the Monitoring hub, it can also appear when multiple sessions are active.
Solution: upgrade the SKU of your Fabric capacity, or make sure only one session is active/running.