
BriefStop
Frequent Visitor

Setting a Spark cluster when running a notebook or anything else (e.g. pipeline, dataflow, etc.)?

Normally in Databricks I can set the cluster that a notebook runs on via the box in the top-right of the notebook. However, in a Fabric notebook, I can't see that option in the top-right corner anymore.

How can I assign a cluster to a notebook or to any other activity, e.g. a pipeline?

[Screenshot: Fabric notebook - where's the cluster option?]

[Screenshot: Databricks cluster selection in notebook]

2 REPLIES
Srisakthi
Super User

Hi @BriefStop ,

 

You can set the Spark properties in the workspace settings: create a pool based on your requirements for the Spark nodes and attach it to your workspace. You can also create a new environment in which you specify the executor cores, memory, and dynamic allocation to be used. In the environment's Spark properties tab you can add environment-specific Spark properties and libraries.
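On top of these workspace-level pool and environment settings, Fabric notebooks also support per-session overrides with the `%%configure` magic in the first cell of the notebook. A minimal sketch, assuming the Livy-style keys Fabric accepts; the specific values here are only illustrative:

```
%%configure -f
{
    "driverMemory": "28g",
    "executorMemory": "28g",
    "executorCores": 4,
    "conf": {
        "spark.dynamicAllocation.enabled": "true"
    }
}
```

Properties set this way apply only to the current Spark session, so they are handy for one-off experiments without changing the shared environment.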

Please refer to the screenshots:

[Screenshot: Srisakthi_0-1729579433974.png]

[Screenshot: Srisakthi_1-1729579452756.png]

 

Thanks,

Srisakthi

 

---------------------------------------------------------------------------

If this answers your question, please mark it as the accepted solution.

frithjof_v
Super User

For a Notebook, it's in the Environment dropdown.

 

For general use cases, I think I would just use the default, i.e. the Starter pool, as it has shorter startup times.

 

In that case, you don't need to think about clusters at all. The Notebook will automatically use the default pool (cluster) if you don't specify otherwise.

 

I think only Notebooks (and Spark Job Definitions) use Spark.

 

I'm not aware that Data Pipeline or Dataflow Gen2 use Spark. I think they use another technology which is fully managed (hidden from us).

If you run a Notebook in a Data Pipeline, it will use Spark.

 

Here is a link to some information about Apache Spark in Fabric:

https://learn.microsoft.com/en-us/fabric/data-engineering/spark-compute
