Issue with Microsoft Fabric Capacity F8 or higher for Spark Pool when running notebooks in parallel
I have been using a Microsoft Fabric F8 capacity which, per the Microsoft documentation, provides 2 nodes for the Spark starter pool (https://learn.microsoft.com/en-us/fabric/data-engineering/configure-starter-pools).
Last month (March 2024) I had 2 notebook jobs running simultaneously, and they worked smoothly until the beginning of this month (April 2024), when they could no longer run concurrently.
The first notebook runs without errors, but the second one fails with: "Notebook execution failed at Notebook service with http status code - '200', please check the Run logs on Notebook, additional details - 'Error name - Exception, Error value - Failed to create Livy session for executing notebook. Error: [TooManyRequestsForCapacity] This spark job can't be run because you have hit a spark compute or API rate limit. To run this spark job, cancel an active Spark job through the Monitoring hub, choose a larger capacity SKU, or try again later. HTTP status code: 430 {Learn more}'."
I subsequently upgraded the capacity to F16, but the issue persisted, even though F16 should provide a larger pool (3 nodes).
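Since the [TooManyRequestsForCapacity] error is a transient throttling response ("try again later"), one common workaround while waiting on a proper fix is to retry the submission with exponential backoff. This is only a sketch, not Microsoft's prescribed remedy: the `submit` callable and the error-string matching are assumptions, standing in for whatever mechanism actually launches the notebook (e.g. a wrapper around a notebook-run call or REST submission).

```python
import time

def run_with_backoff(submit, max_retries=5, base_delay=30.0):
    """Retry a notebook-submission callable when capacity is throttled.

    `submit` is any zero-argument callable that raises an exception whose
    message contains 'TooManyRequestsForCapacity' when the Spark compute
    or API rate limit is hit (a hypothetical wrapper, not a Fabric API).
    """
    for attempt in range(max_retries):
        try:
            return submit()
        except Exception as exc:
            if "TooManyRequestsForCapacity" not in str(exc):
                raise  # a different failure; don't mask it
            if attempt == max_retries - 1:
                raise  # out of retries; surface the throttling error
            # exponential backoff: 30 s, 60 s, 120 s, ...
            time.sleep(base_delay * (2 ** attempt))
```

This only papers over the symptom; it does not explain why a capacity that previously ran two notebooks concurrently no longer can.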
My workspace configuration is as follows:
Workspace Environment: Runtime Version 1.1
I hope Microsoft resolves this issue promptly or provides a way to revert to the previous behavior. It's becoming challenging to use Fabric in a production environment.
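Until the underlying capacity issue is resolved, another way to avoid tripping the limit is to cap how many notebooks are in flight at once instead of launching them all in parallel. A minimal sketch using a thread pool; the zero-argument `jobs` callables are stand-ins for whatever submission mechanism is in use, not a Fabric API:

```python
from concurrent.futures import ThreadPoolExecutor

def run_notebooks_throttled(jobs, max_concurrent=1):
    """Run notebook jobs with at most `max_concurrent` in flight.

    `jobs` is a list of zero-argument callables, each submitting one
    notebook. Results are returned in the same order as `jobs`.
    """
    with ThreadPoolExecutor(max_workers=max_concurrent) as pool:
        # submit everything; the pool itself enforces the concurrency cap
        futures = [pool.submit(job) for job in jobs]
        return [f.result() for f in futures]
```

With `max_concurrent=1` the jobs run strictly one after another, trading throughput for staying under the capacity's Spark job limit.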
Hi @nguyenhieubis ,
Apologies for the issue you are facing. The best course of action is to open a support ticket so our support team can take a closer look at it.
Please reach out to our support team so they can investigate more thoroughly why this is happening: Link
Once you have created the support ticket, please share the ticket number here so we can track it for more information.
Hope this helps. Please let us know if you have any other queries.
Hi @nguyenhieubis ,
We haven't heard back from you on the last response and were just checking in to see whether you have had a chance to create a support ticket.
Once you have created the support ticket, please share the ticket number here so we can track it for more information.
Thank you for your assistance. I identified that the issue originated from the Southeast Asia region. I have now switched to the East Asia region, and everything seems to be working fine.
Glad to know that you were able to resolve your issue. Please continue to use the Fabric Community for any further queries.
