
Anonymous
Not applicable

Export entire delta table from lakehouse

Is there any way to get a .csv onto my computer from a delta table in a lakehouse? I haven't found any viable approach so far despite trying.

7 REPLIES
FabianSchut
Super User

One way to achieve this is to write a small Python script in a notebook that reads the delta table from your lakehouse and exports it to the Files section of the lakehouse. It may be convenient to convert to a pandas dataframe after you've read the delta table with Spark, and then use pandas' 'to_csv' functionality.
With the OneLake file explorer (https://www.microsoft.com/en-us/download/details.aspx?id=105222) you can then view this CSV and copy it to a local directory on your computer.
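The approach above can be sketched roughly as follows. This is a hedged sketch: it assumes a Fabric notebook where a `spark` session exists and a hypothetical table named `my_table`; here a small pandas frame stands in for the result of `toPandas()` so the export step itself is runnable anywhere.

```python
import pandas as pd

# In a Fabric notebook you would first read the delta table with Spark:
#   df = spark.read.table("my_table")
#   pdf = df.toPandas()
# A small stand-in frame is used here so the to_csv step runs anywhere.
pdf = pd.DataFrame({"id": [1, 2], "name": ["a", "b"]})

# In Fabric, the Files section is mounted at /lakehouse/default/Files/;
# writing there makes the CSV visible in the OneLake file explorer.
out_path = "my_table.csv"  # in Fabric: "/lakehouse/default/Files/my_table.csv"
pdf.to_csv(out_path, index=False)
```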

Anonymous
Not applicable

Brilliant, it works. FYI, if I use

df.write.csv("Files/my_csv.csv")

to write the file, it keeps Spark's partitioned output structure, i.e. it creates a folder named "my_csv.csv" containing the part files. However, those files are in CSV, so for my purposes it just works.

If you use pandas' to_csv() instead of PySpark's write.csv(), you will get a single CSV file.

 

https://learn.microsoft.com/en-us/fabric/data-science/read-write-pandas
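Since Spark's write.csv produces a folder of part files rather than one file, here is a hedged sketch of stitching those parts back into a single CSV locally. The folder and part-file names are hypothetical stand-ins mimicking Spark's output layout:

```python
import csv
import glob
import os

# Create a stand-in folder that mimics Spark's "my_csv.csv/" output
# directory, with header-bearing part files like part-00000.csv, ...
os.makedirs("my_csv.csv", exist_ok=True)
parts = [[["id", "name"], ["1", "a"]], [["id", "name"], ["2", "b"]]]
for i, rows in enumerate(parts):
    with open(f"my_csv.csv/part-{i:05d}.csv", "w", newline="") as f:
        csv.writer(f).writerows(rows)

# Merge the part files into a single CSV, keeping only the first header.
with open("merged.csv", "w", newline="") as out:
    writer = csv.writer(out)
    for i, part in enumerate(sorted(glob.glob("my_csv.csv/part-*.csv"))):
        with open(part, newline="") as f:
            rows = list(csv.reader(f))
            writer.writerows(rows if i == 0 else rows[1:])
```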

 

Pandas cannot handle data volumes as large as Spark can.

https://www.reddit.com/r/MicrosoftFabric/s/7EloDpd8wI

 

If your data volume is not too large, then I guess I would use pandas for writing to a single CSV.
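If the table is too large for a single toPandas() call, one hedged workaround is appending chunks to one CSV. Here small pandas frames stand in for chunks that would, in a notebook, come from iterating the Spark dataframe (e.g. via toLocalIterator()) rather than materializing it all at once:

```python
import pandas as pd

# Hypothetical chunks; in a Fabric notebook these could be produced by
# iterating the Spark dataframe instead of loading everything into memory.
chunks = [pd.DataFrame({"id": [1, 2]}), pd.DataFrame({"id": [3, 4]})]

path = "big_table.csv"
for i, chunk in enumerate(chunks):
    # Write the header only with the first chunk, then append the rest.
    chunk.to_csv(path, mode="w" if i == 0 else "a",
                 header=(i == 0), index=False)
```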

But seriously, converting a Delta table back to CSV is a travesty. Can your application not work with Parquet?

Anonymous
Not applicable

My goals are beyond your understanding. 

¯\_(ツ)_/¯

lbendlin
Super User

The Delta tables are already in Parquet format. It would be counterproductive to convert that to CSV.

 

"the entire table" - careful with that. Delta supports versioning, so if you export the underlying Parquet files directly you risk exporting all the prior versions too.

Anonymous
Not applicable

There is only one version.
