Hi,
I'm a complete newbie to the cloud, coming from an on-premises MS stack.
I'm trying to put together an end-to-end PoC.
I have created a lakehouse.
So far I have dumped data from an external system to CSV files locally.
The plan is to use OneLake file explorer to sync the files to the cloud.
So now I have a folder with multiple CSV files in my lakehouse.
I want to use a notebook to read those files and write them out as Parquet.
For the life of me I cannot find anything on how to loop over the folder. I tried os and glob, but I don't know what path to pass in.
I'm not sure this is the right approach, but the idea is to create a new folder of staging CSV files each day,
somehow convert them to Parquet files, and compare against the previous day's data to work out what is new and what has been modified.
Then use dbt to transform the data and finally load it into the data warehouse.
So, back to my question: what path do I loop over in the notebook?
Thanks
@jonjoseph if I understood correctly, you have a folder of CSV files under Files in your lakehouse.
Grab the ABFS path of that folder and use it in the notebook:
// Replace this with your actual folder path
val files = "abfss://workspace@onelake.dfs.fabric.microsoft.com/testLH2.Lakehouse/Files/DailyFiles"

// Read every CSV file in the folder, tagging each row with its source file name and modification time
val df = spark.read.option("header", "true").csv(files)
  .select("*", "_metadata.file_name", "_metadata.file_modification_time")

display(df)
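The original question also asks about landing the data as Parquet. For a notebook running PySpark rather than Scala, a rough sketch of the same read plus a Parquet write could look like the following; the abfss path simply reuses the example values from the snippet above, and DailyFilesParquet is just a placeholder output folder:

# The abfss path reuses the example values from the Scala snippet above;
# DailyFilesParquet is a placeholder output folder - adjust both to your workspace and lakehouse
src_path = "abfss://workspace@onelake.dfs.fabric.microsoft.com/testLH2.Lakehouse/Files/DailyFiles"
dst_path = "abfss://workspace@onelake.dfs.fabric.microsoft.com/testLH2.Lakehouse/Files/DailyFilesParquet"

# Same read as above: every CSV in the folder, plus the source file name and modification time per row
df = (spark.read.option("header", "true").csv(src_path)
      .select("*", "_metadata.file_name", "_metadata.file_modification_time"))

# Write the combined result back to the lakehouse Files area as Parquet
df.write.mode("overwrite").parquet(dst_path)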
Copying the File API path and using it with os.listdir worked!
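For reference, a minimal sketch of that approach, assuming the lakehouse is attached as the notebook's default lakehouse so its Files section is mounted under /lakehouse/default/Files (the DailyFiles folder name is the example used earlier in this thread):

import os

# File API path: when the lakehouse is attached as the notebook's default lakehouse,
# its Files section is mounted locally, so plain Python file APIs work against it
folder = "/lakehouse/default/Files/DailyFiles"  # folder name taken from the example above

# Loop over the folder and pick out the CSV files
csv_files = [name for name in os.listdir(folder) if name.lower().endswith(".csv")]

for name in csv_files:
    print(os.path.join(folder, name))

Each entry can then be read with pandas from the same mounted path, or mapped onto the ABFS path from the earlier snippet for Spark reads.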
Hi @jonjoseph ,
Glad to know your issue got resolved. Please continue using the Fabric Community for any further queries.