Hi,
I'm a complete newbie to the cloud, coming from the on-premises MS stack.
I'm trying to put together an end-to-end PoC.
I have created a lakehouse.
So far I have dumped data from an external system to CSV files locally.
The plan is to use OneLake file explorer to sync the files to the cloud.
So now I have a folder with multiple CSV files in my lakehouse.
I want to use a notebook to read those files and write them out as parquet.
For the life of me I cannot find anything on how to loop over the folder; I tried os and glob, but I don't know what path to pass in.
Not sure this is the right approach, but the idea is to create a new folder of staging CSV files each day, convert them to parquet files, and compare against the previous day's data to work out what's new and what's modified.
Then use dbt to transform the data and finally load it into the data warehouse.
So, back to the question: what path do I pass in to loop over the folder in the notebook?
Thanks
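For the day-over-day comparison step described in the question, here is a minimal PySpark sketch (not from the thread). It assumes the notebook has the lakehouse attached so relative Files/ paths resolve, that each day's snapshot lands in its own parquet folder, and that rows carry a stable key; the folder names and the "id" column are hypothetical.

```python
# Hypothetical layout: one parquet folder per day under Files/staging
today = spark.read.parquet("Files/staging/2024-06-02")
yesterday = spark.read.parquet("Files/staging/2024-06-01")

# Rows that appear today but not yesterday, compared on all columns,
# i.e. both brand-new rows and rows whose values changed
changed = today.exceptAll(yesterday)

# If rows carry a stable key (hypothetical "id" here), an anti-join isolates brand-new rows only
new_rows = today.join(yesterday.select("id"), on="id", how="left_anti")

display(changed)
```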
@jonjoseph If I understood correctly, you have something like this.
Grab the ABFS path of the folder and use it in the notebook:
// Replace this with your actual folder path
val files = "abfss://workspace@onelake.dfs.fabric.microsoft.com/testLH2.Lakehouse/Files/DailyFiles"

// Read each CSV file in the folder, tagging rows with the source file name and modification time
val df = spark.read.option("header", "true").csv(files)
  .select("*", "_metadata.file_name", "_metadata.file_modification_time")

display(df)
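If it helps, here is the same read in PySpark plus the parquet write the question asks about; this is a sketch, not part of the accepted answer, and the output folder name is a placeholder.

```python
# Placeholder ABFS path; copy the real one from your lakehouse folder
files = "abfss://workspace@onelake.dfs.fabric.microsoft.com/testLH2.Lakehouse/Files/DailyFiles"

# Read every CSV in the folder, keeping each row's source file name and modification time
df = (
    spark.read.option("header", "true")
    .csv(files)
    .select("*", "_metadata.file_name", "_metadata.file_modification_time")
)

# Write the combined data out as parquet (placeholder output folder)
df.write.mode("overwrite").parquet(
    "abfss://workspace@onelake.dfs.fabric.microsoft.com/testLH2.Lakehouse/Files/Staging/parquet"
)
```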
Copying the File API path and using it with os.listdir worked!
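For anyone landing here later, a minimal sketch of that approach, assuming the lakehouse is attached to the notebook so its Files section is mounted under the local File API path, and reusing the DailyFiles folder name from the answer above:

```python
import os

# File API path copied from the lakehouse explorer (local mount point of the Files section)
folder = "/lakehouse/default/Files/DailyFiles"

# Loop the folder with plain Python and collect the CSV files
csv_files = [
    os.path.join(folder, name)
    for name in os.listdir(folder)
    if name.lower().endswith(".csv")
]

for path in csv_files:
    print(path)
```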
Hi @jonjoseph ,
Glad to know your issue got resolved. Please continue using Fabric Community for your further queries.