Liam_McCauley
Frequent Visitor

Predefined Spark resource profiles

Inspired by this blog entry, I've been looking into using predefined Spark resource profiles: Supercharge your workloads: write-optimized default Spark configurations in Microsoft Fabric | Micro...

 

The use cases seem quite straightforward, and I don't see any reason not to use ReadHeavyForPBI for our Gold layer.

But how do you decide between ReadHeavyForSpark and WriteHeavy for the Bronze and Silver layers?

For Bronze and Silver tables that will end up as facts in our Gold layer, should you use WriteHeavy?

But for tables that will end up as slowly changing dimensions, would it be best to use ReadHeavyForSpark, as we will spend more time reading them than writing to them?

 

Has anyone measured any of these scenarios, and come up with recommendations?

 

 

A quick description of our architecture, for context:

  • We use a medallion architecture with Bronze, Silver, and Gold Lakehouses, each in its own workspace.
  • We store Notebooks and Data pipelines in a separate workspace that we call "process".
  • We process fact and dimension data for multiple business areas.
  • Volumes range from hundreds of records to 100M records per month, depending on the data source.
1 ACCEPTED SOLUTION
Vinodh247
Resolver III

Good question!

 

If you want to validate this approach, a practical way is to:

  1. Use Fabric's Activity Runs or the Spark history server to measure duration with each profile.

  2. Keep the data volume constant, switch the profile, and compare metrics like CPU time, shuffle read/write, and cached memory (a rough harness for this is sketched after this list).
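For reference, such a comparison could be scripted along the lines of the sketch below. It assumes the profile can be switched at session level through a property named spark.fabric.resourceProfile (verify the exact key and accepted values in the resource profile documentation), uses the notebook's built-in spark session, and the table names are just placeholders.

```python
# Rough benchmark: time the same synthetic write under two resource profiles.
# ASSUMPTION: the session-level property is "spark.fabric.resourceProfile" and it
# accepts values such as "writeHeavy" and "readHeavyForSpark" -- verify the exact
# key and values against the resource profile documentation.
# `spark` is the notebook's built-in SparkSession.
import time

from pyspark.sql import functions as F


def timed_write(profile: str, target_table: str) -> float:
    """Write a synthetic Delta table under the given profile and return seconds taken."""
    spark.conf.set("spark.fabric.resourceProfile", profile)  # assumed property name
    df = (
        spark.range(0, 10_000_000)
        .withColumn("bucket", F.col("id") % 100)
        .withColumn("payload", F.sha1(F.col("id").cast("string")))
    )
    start = time.time()
    df.write.mode("overwrite").format("delta").saveAsTable(target_table)
    return time.time() - start


for profile in ["writeHeavy", "readHeavyForSpark"]:
    elapsed = timed_write(profile, f"bench_{profile.lower()}")  # placeholder table names
    print(f"{profile}: {elapsed:.1f}s")
```

Pair the timings with the Spark history server metrics (CPU time, shuffle read/write), and confirm whether a mid-session profile switch picks up every underlying setting or whether a fresh session per profile is safer.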

There are no public benchmarks from Microsoft for these specific scenarios; I assume the profiles are based on early-adopter feedback and internal testing from the Fabric preview days. In practice, though:

  • WriteHeavy consistently reduces latency during large ingestions and merges.

  • ReadHeavyForSpark shows noticeable improvements in transformation-heavy pipelines, especially those with large joins.

  • ReadHeavyForPBI makes Power BI Direct Lake reports faster and more stable under load.

The rationale for assigning a profile to each layer is usually along these lines:

Bronze Layer (Raw Ingestion)

  • Recommended: Use WriteHeavy

    • Data is typically appended.

    • You are not reading it often; transformations happen downstream.

    • Prioritise write throughput and ingestion latency.

     

Silver Layer (Cleansed, Business Logic Applied)

  • Decision Point: Depends on your operations per table.

If the table is append-only and used in fact pipelines (for example, transactional facts):

  • Use: WriteHeavy

  • Optimise ETL throughput, especially if you are reading directly from Bronze and writing enriched data.

If the table is dimension-like (e.g., SCDs and lookups) and used across many pipelines:

  • Use: ReadHeavyForSpark

    • These tables are typically read-heavy across many processes (joins, lookups).

    • The frequency and cost of reads outweigh the write overhead.

    • This is especially true for SCD Type 2, where point-in-time analysis requires frequent filtered reads.

     

Gold Layer (Consumption/Visualization)

  • Use: ReadHeavyForPBI

    • Designed for consumption.

    • Read latency impacts user experience.

    • Optimised for DirectQuery and Direct Lake queries in Power BI.
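If you adopt this mapping, one way to keep it consistent is to set the profile at the top of each layer's notebook (or once in the layer's environment). Below is a minimal sketch, again assuming the session-level property spark.fabric.resourceProfile and the profile names used above; confirm both against the resource profile documentation before relying on it.

```python
# Pin a resource profile per medallion layer at the top of each notebook.
# ASSUMPTION: the property name "spark.fabric.resourceProfile" and the profile
# values below should be confirmed against the resource profile documentation.
# Profiles can also be set once per workspace/environment instead of per notebook.

LAYER_PROFILES = {
    "bronze": "writeHeavy",              # append-heavy raw ingestion
    "silver_fact": "writeHeavy",         # bulk ETL writes from Bronze
    "silver_dim": "readHeavyForSpark",   # SCD/lookup tables read by many pipelines
    "gold": "readHeavyForPBI",           # Direct Lake / DirectQuery consumption
}


def apply_profile(layer: str) -> None:
    """Set the session-level resource profile for the given layer ('spark' is the notebook session)."""
    profile = LAYER_PROFILES[layer]
    spark.conf.set("spark.fabric.resourceProfile", profile)  # assumed property name
    print(f"Resource profile for {layer}: {profile}")


# Example: at the top of a Silver dimension notebook
apply_profile("silver_dim")
```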

     

     

Please 'Kudos' and 'Accept as Solution' if this answered your query.


2 REPLIES
v-veshwara-msft
Community Support

Hi @Liam_McCauley,

Thanks for raising this in the Fabric Community Forum and thanks to @Vinodh247 for the detailed input.

As mentioned by @Vinodh247, a commonly observed pattern is to use WriteHeavy for ingestion-heavy Bronze, ReadHeavyForSpark for dimension tables in Silver that are frequently read or joined, and ReadHeavyForPBI in the Gold layer where reporting performance is a focus.

 

For the Silver layer, selecting between WriteHeavy and ReadHeavyForSpark often depends on how the tables are accessed. Tables that are primarily written in bulk and used in fact processing tend to follow the WriteHeavy approach. In contrast, dimension tables, particularly those with SCD logic or used across multiple pipelines, may benefit from ReadHeavyForSpark due to more read-intensive operations.

 

If you're evaluating these options, comparing Spark history or Activity Runs across different profiles on the same workload can help identify which setting works best in your context.
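When comparing runs, it can also help to record which underlying settings the active profile actually changed, so differences in the metrics can be tied to concrete configuration values. A small sketch along these lines is shown below; the listed keys (profile property, V-Order, optimize write) are examples to verify against the documentation rather than a definitive list.

```python
# Dump a few Spark/Delta settings so run comparisons can be tied to concrete
# configuration differences between profiles.
# ASSUMPTION: the keys below are examples of settings a profile is likely to
# touch -- verify the exact names against the resource profile documentation.
candidate_keys = [
    "spark.fabric.resourceProfile",                  # assumed profile property
    "spark.sql.parquet.vorder.default",              # V-Order default for writes
    "spark.databricks.delta.optimizeWrite.enabled",  # optimize write
    "spark.databricks.delta.optimizeWrite.binSize",
]

for key in candidate_keys:
    # `spark` is the notebook session; the second argument is the fallback value.
    print(f"{key} = {spark.conf.get(key, '<not set>')}")
```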

 

You can find the official documentation on resource profiles here:
Configure Resource Profile Configurations in Microsoft Fabric - Microsoft Fabric | Microsoft Learn

 

Hope this helps. Please reach out if you need further assistance.

Please consider marking the helpful reply as Accepted Solution to assist others with similar queries.

Thank you.
