
jwryu
Advocate II

MS Fabric resource allocation

Hello, I need some help understanding Fabric's resource allocation.

 

If a notebook is running and using the full capacity (with autoscale on), what happens when other jobs, such as notebooks, pipelines, or Power BI, are triggered?

I can think of two possibilities:

1) The capacity units (CUs) allocated to the running notebook are reduced so that the later triggered jobs can execute concurrently, and as a result the job that was already running slows down.

2) Resources are not reallocated, and the later triggered jobs are queued or fail.

[Screenshot: jwryu_1-1708304238241.png]

 

 

I am also curious: if the Spark pool were defined as below, would the later triggered jobs avoid failure because there is still spare capacity?

[Screenshot: jwryu_2-1708304738273.png]

 

 

 

1 ACCEPTED SOLUTION
Anonymous
Not applicable

Hi @jwryu ,

Thanks for using Fabric Community.

Scenario 1: Resource Allocation and Concurrent Execution

  • If Fabric's default resource allocation behavior is followed, resources (capacity units) might be reallocated dynamically from the running notebook to accommodate incoming jobs. This would slow the already running notebook, since it now has fewer resources.
  • However, you can configure custom resource limits for individual users, notebooks, or jobs to prevent unexpected resource reduction from existing tasks.
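As an illustration of per-session limits, a Fabric notebook can request a fixed session size with a `%%configure` magic cell, which caps how much of the pool that one session can consume. The field names below follow the Livy-style configuration used by Fabric/Synapse notebooks; treat the specific values as placeholders to adapt to your pool:

```
%%configure -f
{
    "driverCores": 4,
    "driverMemory": "28g",
    "executorCores": 4,
    "executorMemory": "28g",
    "numExecutors": 2
}
```

Running this in the first cell pins the session to a known footprint, so one notebook cannot starve the pool even when autoscale is on.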

Scenario 2: No Re-allocation and Queuing/Failure

  • If autoscaling is disabled or if there are no available resources (even in a defined Spark pool), later triggered jobs will likely be queued or fail.
  • You can configure resource queues to prioritize or manage the execution order of queued jobs.
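The queue-or-fail behavior can be sketched with a toy admission-control model. This is only an illustration of the concept, not Fabric's actual scheduler, which also considers bursting, priorities, and queue limits; the function name and numbers are made up:

```python
# Toy model of admitting a new job against a fixed pool of Spark VCores.
# Illustrative only; Fabric's real scheduler is more sophisticated.

def admit(pool_vcores, running_vcores, job_vcores, queue_enabled=True):
    """Decide what happens to a newly triggered job."""
    free = pool_vcores - running_vcores
    if job_vcores <= free:
        return "run"            # enough headroom: job starts immediately
    return "queue" if queue_enabled else "fail"

# A 64-VCore pool with a notebook already using all 64 VCores:
print(admit(64, 64, 16))                       # "queue": waits behind the notebook
print(admit(64, 64, 16, queue_enabled=False))  # "fail": rejected outright
print(admit(64, 32, 16))                       # "run": fits in the remaining 32
```

The key point the model captures: whether a saturated pool queues or rejects new work is a policy choice, separate from how much capacity exists.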

Considerations for Your Spark Pool Definition:

  • If your Spark pool has more available resources than the running notebook requires, additional jobs might be able to execute concurrently without impacting the original job, depending on Fabric's configuration.
  • However, be mindful of potential contention for shared resources like memory and network bandwidth, which could still slow down tasks.

Key Factors Influencing Behavior:

  • Fabric configuration: Specific settings for resource allocation, autoscaling, queues, and priority levels.
  • Notebook resource usage: The amount of resources (CPUs, memory) actively used by the running notebook.
  • Available resources: Total resources in your Fabric environment and within the specified Spark pool.
  • Job characteristics: Resource requirements (CPUs, memory) of other triggered jobs.
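To make the "available resources" factor concrete, here is a back-of-the-envelope sketch. It assumes the commonly documented ratio of 2 Spark VCores per capacity unit and a Medium node size of 8 VCores; verify both figures against the current Fabric documentation for your SKU:

```python
# Rough capacity math for a Fabric SKU, under the stated assumptions.
VCORES_PER_CU = 2       # assumption: 1 capacity unit ~ 2 Spark VCores
MEDIUM_NODE_VCORES = 8  # assumption: Medium node size in VCores

capacity_cus = 64  # e.g. an F64 SKU
total_vcores = capacity_cus * VCORES_PER_CU
max_medium_nodes = total_vcores // MEDIUM_NODE_VCORES

print(total_vcores)      # 128 VCores available to Spark
print(max_medium_nodes)  # up to 16 Medium nodes shared across all jobs
```

Comparing your running notebook's node count against this ceiling tells you how much headroom remains for later triggered jobs.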

For more information, please refer to this documentation:
Spark workspace administration settings in Microsoft Fabric - Microsoft Fabric | Microsoft Learn

Hope this is helpful. Please let me know in case of further queries.

