pbix
Helper III

Loading data from ADLS Delta Table into Spark Dataframe is slow

Hi there, 

 

I'm experiencing slow read times when loading data from delta tables into data frames using PySpark in Synapse notebooks. 

 

This does not include the time taken for the Spark cluster to spin up. 

 

The delta table I am loading from is relatively small, approximately 1 million rows, yet it takes about 30 seconds to load these rows into a dataframe.

 

Compared to SQL Server, this is very slow.

 

The simple syntax I'm using is:

 

df = spark.read.format("delta").load(deltasource).select("field1", "field2", "field3")
 
The data consists of text codes and dates, and the table is not especially wide.
 
I am not doing any processing on this dataframe yet - just loading it. 
 
Are there any likely causes for the slow dataframe load? Synapse Serverless loads this same dataset much faster as well. 
 
Thank you.
 
 
 
 
 
 
1 REPLY

Hi @pbix 

Where are you executing this query: Fabric or Synapse? If it's Synapse, what Spark pool size is used to run the notebook?

If you are using Fabric, what type of environment is it, Trial or Dedicated Capacity? If it's dedicated capacity, what are the SKU and node sizes; if it's a trial, what node size is used?

In Synapse Serverless, how did you test it: simply a SELECT * FROM the table, or some other way?
