hlbchant
Regular Visitor

Spark Sessions in MS Fabric Fail to Connect

I repeatedly get this error message when trying to run notebooks in my F2 capacity:

[screenshot of the Spark session connection error]

I have no active Spark jobs when I search in the Monitoring hub, so I'm unsure how to remedy this issue. Any ideas?

 

5 REPLIES
AnmolGan81
Regular Visitor

I am facing the same issue on F2 capacity. Which capacity did you move to in order to get past this issue? For me there are no active Spark sessions or jobs ongoing, and I am still not able to connect to a single Spark session. It seems F2 capacity is just a waste of money.

achrafcei
New Member

Hello,

 

Are you the only one using that capacity? My first instinct would be that it is running workloads in another workspace to which you don't have access.

AwadFabric
Regular Visitor

Hi @hlbchant, as the other posts mention, the problem is most likely caused by capacity limitations.

I would suggest trying high concurrency mode, which allows you to run multiple notebooks within the same Spark session. Since last week, this feature is also available for notebook executions within pipelines.

Here is more information about it:

Configure high concurrency mode for notebooks - Microsoft Fabric | Microsoft Learn

Configure high concurrency mode for notebooks in pipelines - Microsoft Fabric | Microsoft Learn
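Not something from the docs above, just a sketch of a related idea: a parent notebook can also fan work out to child notebooks with mssparkutils.notebook.run(), so the children reuse the parent's single Spark session instead of each requesting their own on the F2. The notebook names and parameter below are hypothetical; this is complementary to, not a replacement for, the high concurrency setting.

# Sketch only: run child notebooks inside the parent's existing Spark session,
# so each child does not try to start its own session on the capacity.
# "Prepare Data" and "Build Model" are hypothetical notebooks in the same workspace.
prep_result = mssparkutils.notebook.run("Prepare Data", 600)   # 600-second timeout
model_result = mssparkutils.notebook.run("Build Model", 600, {"input_table": "sales_raw"})
print(prep_result, model_result)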

v-cgao-msft
Community Support

Hi @hlbchant ,

 

Once the max queue limit has been reached for a Fabric capacity, new jobs submitted will be throttled with this error message: [TooManyRequestsForCapacity] This spark job can't be run because you have hit a spark compute or API rate limit. To run this spark job, cancel an active Spark job through the Monitoring hub, choose a larger capacity SKU, or try again later. HTTP status code: 430.

Job queueing in Apache Spark for Fabric - Microsoft Fabric | Microsoft Learn

 

You can also try waiting for some time; resources may become available and you can then run your Spark job again.
If there are multiple jobs that need to run, try adding them to the queue:
Introducing Job Queueing for Notebook in Microsoft Fabric | Microsoft Fabric Blog | Microsoft Fabric
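Gao's reply doesn't include code, but here is a minimal sketch of the "try again later" part for jobs you trigger programmatically. submit_notebook_run is a hypothetical placeholder for whatever actually submits the job (REST call, pipeline trigger, etc.); only the retry-on-430 pattern is the point.

import time

def submit_with_backoff(submit_notebook_run, max_attempts=5, base_delay_s=60):
    # Retry a submission that gets throttled with TooManyRequestsForCapacity (HTTP 430).
    # submit_notebook_run: hypothetical callable returning an object with .status_code.
    for attempt in range(1, max_attempts + 1):
        response = submit_notebook_run()
        if response.status_code != 430:    # not throttled, we're done
            return response
        wait_s = base_delay_s * attempt    # simple linear backoff between retries
        print(f"Capacity busy (430), attempt {attempt}/{max_attempts}; waiting {wait_s}s")
        time.sleep(wait_s)
    raise RuntimeError("Capacity still throttled after all retries")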

Best Regards,
Gao

Community Support Team

 

If any post helps, please consider accepting it as the solution to help the other members find it more quickly.
If I misunderstand your needs or you still have problems, please feel free to let us know. Thanks a lot!

How to get your questions answered quickly --  How to provide sample data in the Power BI Forum

digitalbrain
Helper I

Hi @hlbchant, this error comes from the Fabric capacity and indicates that you've reached its limit. Although the message mentions looking at active Spark jobs in the Monitoring hub, the error can also appear when multiple sessions are active.

 

Solution: Upgrade the SKU of the Fabric capacity, or make sure only one session is active/running.
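One hedged addition: leftover interactive sessions keep holding capacity until their idle timeout. If you are done with a notebook, you can release its session explicitly; the call below is the helper I believe Fabric/Synapse notebooks expose for this, but please verify it against your runtime.

# Sketch, assuming the mssparkutils helper available in Fabric notebooks:
# stop the interactive Spark session when the notebook is finished,
# so the F2 capacity is freed for the next session instead of waiting for the idle timeout.
mssparkutils.session.stop()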
