Hi,
I'm a complete newbie to the cloud, coming from an on-premises MS stack.
I'm trying to put together an end-to-end PoC.
I have created a lakehouse.
So far I have dumped data from an external system to CSV files locally.
The plan is to use OneLake file explorer to sync the files to the cloud.
So now I have a folder with multiple CSV files in my lakehouse.
I want to use a notebook to read those files and write them out as Parquet.
For the life of me I cannot find anything on how to loop over the folder. I tried os and glob, but I don't know what path to pass in.
I'm not sure this is the right approach, but the idea is that I create a new folder of staging CSV files each day.
I then somehow convert it to Parquet files and compare against the previous day's data to work out what's new and what's modified (a rough sketch of what I mean follows below).
Then I'd use dbt to transform the data and finally load it into the data warehouse.
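To make the compare step concrete, this is roughly the kind of thing I have in mind. It's only a sketch: the dated staging folders are made-up placeholders, and it assumes the lakehouse is attached to the notebook as the default lakehouse so relative Files/ paths resolve.

# Rough PySpark sketch of the day-over-day comparison.
# The staging folder layout and dates below are placeholders for illustration.
today_df = spark.read.parquet("Files/staging/2024-01-02")
yesterday_df = spark.read.parquet("Files/staging/2024-01-01")

# Rows present today but not yesterday; this covers both brand-new rows and
# modified rows, since any changed column value makes the whole row differ.
new_or_modified = today_df.exceptAll(yesterday_df)

display(new_or_modified)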
So, back to the question: what path do I need to loop over in the notebook?
Thanks
@jonjoseph, if I understood correctly, you have a folder of CSV files under the lakehouse Files area.
Grab the ABFS path of that folder and use it in the notebook:
// Replace this with your actual folder path
val files = "abfss://workspace@onelake.dfs.fabric.microsoft.com/testLH2.Lakehouse/Files/DailyFiles"

// Read every CSV file in the folder, keeping each row's source file name and
// modification time from Spark's _metadata column
val df = spark.read
  .option("header", "true")
  .csv(files)
  .select("*", "_metadata.file_name", "_metadata.file_modification_time")

display(df)
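Since the question mentions os and glob, the notebook may well be a PySpark one rather than Scala. A rough PySpark equivalent that also writes the combined data back out as Parquet (the stated goal) could look like the following; the staging_parquet output folder is just an example name.

# Minimal PySpark sketch; the workspace/lakehouse names are taken from the
# example above, and the "staging_parquet" output folder is an assumption.
files = "abfss://workspace@onelake.dfs.fabric.microsoft.com/testLH2.Lakehouse/Files/DailyFiles"

df = (
    spark.read.option("header", "true")
    .csv(files)
    .select("*", "_metadata.file_name", "_metadata.file_modification_time")
)

# Write everything out as Parquet under the lakehouse Files area.
df.write.mode("overwrite").parquet(
    "abfss://workspace@onelake.dfs.fabric.microsoft.com/testLH2.Lakehouse/Files/staging_parquet"
)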
Using "Copy File API path" together with os.listdir worked!
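For anyone who lands here later, that approach looks roughly like this; the DailyFiles folder name is just an example, and it assumes the lakehouse is attached to the notebook as the default lakehouse so it is mounted under /lakehouse/default.

import os

# File API path of the folder in the attached default lakehouse
# (paste the value from "Copy File API path" in the lakehouse explorer).
folder = "/lakehouse/default/Files/DailyFiles"

# Loop over the CSV files in the folder.
csv_files = [os.path.join(folder, name) for name in os.listdir(folder) if name.endswith(".csv")]
for path in csv_files:
    print(path)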
Hi @jonjoseph ,
Glad to hear your issue is resolved. Please continue to use the Fabric Community for any further queries.