
jwryu
Helper I
MS Fabric resource allocation

Hello, I need some help understanding Fabric's resource allocation.

Suppose a notebook is running and using the full capacity (with autoscale on). What happens if other jobs, such as notebooks, pipelines, or Power BI reports, are triggered? I can think of two possibilities:

1) The capacity units (CUs) allocated to the running notebook are reduced so that the later-triggered jobs can run concurrently, and as a result the job that was already running slows down.

2) Resources are not re-allocated, and the later-triggered jobs are queued or fail.

jwryu_1-1708304238241.png

I am also curious whether, if the Spark pool were defined as below, the later-triggered jobs would avoid failing because there are still resources available?

jwryu_2-1708304738273.png

1 ACCEPTED SOLUTION
v-gchenna-msft
Community Support

Hi @jwryu ,

Thanks for using Fabric Community.

Scenario 1: Resource Allocation and Concurrent Execution

  • If Fabric's default resource allocation behavior is followed, resources (CPUs) might be reallocated dynamically from the running notebook to accommodate incoming jobs. This would lead to slower execution of the already running notebook due to reduced resources.
  • However, you can configure custom resource limits for individual users, notebooks, or jobs to prevent unexpected resource reduction from existing tasks.
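As one way to keep a notebook's footprint predictable, Fabric notebooks support a `%%configure` magic cell (Livy-style session configuration) that requests the session's resources up front instead of relying on dynamic allocation. A minimal sketch — the specific keys and values below are illustrative, so check the current Fabric notebook documentation for the fields your runtime supports:

```
%%configure -f
{
    "driverMemory": "28g",
    "driverCores": 4,
    "executorMemory": "28g",
    "executorCores": 4,
    "numExecutors": 2
}
```

Run as the first cell; `-f` forces the session to restart with the new settings.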

Scenario 2: No Re-allocation and Queuing/Failure

  • If autoscaling is disabled or if there are no available resources (even in a defined Spark pool), later triggered jobs will likely be queued or fail.
  • You can configure resource queues to prioritize or manage the execution order of queued jobs.
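On the client side, a common mitigation for the queued/failed case is to retry submission with exponential backoff when a job is rejected while the capacity is saturated. This is not a Fabric API — just a generic orchestration pattern, with a hypothetical `submit_job` callable standing in for whatever actually submits your notebook or pipeline run:

```python
import time

def submit_with_retry(submit_job, max_attempts=4, base_delay=1.0):
    """Call `submit_job` until it succeeds, backing off exponentially.

    `submit_job` is any zero-argument callable that raises RuntimeError
    when the submission is rejected (e.g. capacity exhausted) and
    returns a result on success.
    """
    for attempt in range(max_attempts):
        try:
            return submit_job()
        except RuntimeError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure to the caller
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```
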

Considerations for Your Spark Pool Definition:

  • If your Spark pool has more available resources than the running notebook requires, additional jobs might be able to execute concurrently without impacting the original job, depending on Fabric's configuration.
  • However, be mindful of potential contention for shared resources like memory and network bandwidth, which could still slow down tasks.
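To reason about whether a second job can even fit, a back-of-the-envelope check against the pool's core budget is often enough. The numbers and helper functions below are purely illustrative (nothing here queries Fabric):

```python
def free_cores(max_nodes, cores_per_node, cores_in_use):
    """Cores still unreserved in a pool of `max_nodes` nodes."""
    return max_nodes * cores_per_node - cores_in_use

def job_fits(max_nodes, cores_per_node, cores_in_use, requested):
    """True if a new job requesting `requested` cores fits in the pool."""
    return free_cores(max_nodes, cores_per_node, cores_in_use) >= requested

# Illustrative pool: 10 nodes of 8 vCores, with one job holding 64 cores,
# leaves 16 cores of headroom for later-triggered jobs.
```
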

Key Factors Influencing Behavior:

  • Fabric configuration: Specific settings for resource allocation, autoscaling, queues, and priority levels.
  • Notebook resource usage: The amount of resources (CPUs, memory) actively used by the running notebook.
  • Available resources: Total resources in your Fabric environment and within the specified Spark pool.
  • Job characteristics: Resource requirements (CPUs, memory) of other triggered jobs.

For more information, please refer to this documentation:
Spark workspace administration settings in Microsoft Fabric - Microsoft Fabric | Microsoft Learn

Hope this is helpful. Please let me know in case of further queries.


