Hi there,
I'm experiencing slow read times when loading data from Delta tables into DataFrames using PySpark in Synapse notebooks.
This does not include the time taken for the Spark cluster to spin up.
The Delta table I am loading from is relatively small, approximately 1 million rows, yet it takes about 30 seconds to load those rows into a DataFrame.
Compared to SQL Server, this is very slow.
The simple syntax I'm using is:
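(The original snippet didn't survive in the post; the sketch below is a hypothetical reconstruction of a typical Delta read in a Synapse notebook. The table name and storage path are placeholders, and `spark` is the SparkSession the notebook provides.)

```python
# Hypothetical reconstruction -- names and path are placeholders, not from the original post.
# `spark` is the SparkSession predefined in Synapse/Fabric notebooks.

# Path-based read of a Delta table in ADLS Gen2:
df = spark.read.format("delta").load(
    "abfss://container@account.dfs.core.windows.net/tables/my_table"
)

# Or, if the table is registered in the metastore/Lakehouse:
df = spark.read.table("my_table")

df.show(10)
```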
Hi @Anonymous
Where are you executing this query: Fabric or Synapse? If it's Synapse, what Spark pool size is used to run the notebook?
If you are using Fabric, what type of environment is it, Trial or Dedicated Capacity? If it's a dedicated capacity, what are the SKU and node size? If it's a trial, what node size is used?
In Synapse serverless, how did you test it? Was it simply a SELECT * FROM table, or some other way?
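One thing worth checking when you measure this: `spark.read` is lazy, so the 30 seconds only materializes when an action runs, and different actions do very different amounts of work. A minimal timing sketch, assuming a notebook session and a placeholder table path:

```python
import time

# spark.read is lazy: this line alone reads only metadata, not the data.
df = spark.read.format("delta").load("/path/to/delta_table")  # placeholder path

# Timing is only meaningful once an action forces execution.
start = time.time()
n = df.count()  # full scan of the table
print(f"count() over {n} rows took {time.time() - start:.1f}s")

start = time.time()
rows = df.limit(1000).collect()  # pulls a small sample to the driver
print(f"collect() of 1,000 rows took {time.time() - start:.1f}s")
```

If `count()` is slow but `limit().collect()` is fast, the bottleneck is likely the scan itself (file count, partitioning, pool size) rather than the driver or network.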