Hi there,
I'm experiencing slow read times when loading data from delta tables into data frames using PySpark in Synapse notebooks.
This does not include the time taken for the Spark cluster to spin up.
The delta table I am loading from is relatively small, approximately 1 million rows, yet it takes about 30 seconds to load them into a dataframe.
Compared to SQL Server, this is very slow.
The syntax I'm using is essentially the standard Delta read, along the lines of the sketch below (the table path is a placeholder):
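```python
from pyspark.sql import SparkSession

# In a Synapse notebook the session already exists as `spark`;
# getOrCreate() just makes the snippet self-contained.
spark = SparkSession.builder.getOrCreate()

# Placeholder -- substitute the actual Delta table location.
path = "abfss://container@account.dfs.core.windows.net/delta/my_table"

df = spark.read.format("delta").load(path)
df.count()  # forces the read; this is the ~30-second step
```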
Hi @pbix
Where are you executing this query: Fabric or Synapse? If it's Synapse, what Spark pool size is used to run the notebook?
If you are using Fabric, what type of environment is it, trial or dedicated capacity? If it's dedicated capacity, what are the SKU and node sizes; if it's a trial, what node size is used?
In Synapse serverless, how did you test it: simply with a SELECT * FROM the table, or some other way?
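Also worth noting: spark.read is lazy, so the load call itself returns almost instantly, and a fair timing has to trigger an action that forces the full scan. A rough sketch of what I mean (the path is a placeholder):

```python
import time

# The read is lazy -- this line returns almost immediately.
df = spark.read.format("delta").load("Tables/my_delta_table")  # placeholder path

# Timing an action such as count() measures the actual scan cost.
start = time.time()
rows = df.count()
print(f"Read {rows:,} rows in {time.time() - start:.1f} s")
```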