Hi,
Is there any way to get the list of parquet files that are being used by the current version of a Delta table loaded in a notebook? I have a Delta table that is overwritten daily, so I have one parquet file per day. I assume the latest version only uses the latest parquet file, because I overwrite the full table. How can I find out the name of that parquet file programmatically?
I've found the solution:
df.inputFiles()
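For context, a minimal sketch of how this can be used in a Fabric notebook (the table path is a placeholder, and `spark` is the session predefined in the notebook):

```python
# Load the Delta table; "Tables/my_daily_table" is a placeholder path.
df = spark.read.format("delta").load("Tables/my_daily_table")

# inputFiles() returns a best-effort snapshot of the files composing the
# DataFrame - for a Delta table, the parquet files of the version just read.
for path in df.inputFiles():
    print(path)
```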
Hi @amaaiia ,
Thanks for using Fabric Community.
In order to provide some suggestions, we'd like to know why you are trying to find the parquet files. Is there anything specific you are trying to do?
If you are looking for optimization, see: Delta Lake table optimization and V-Order - Microsoft Fabric | Microsoft Learn
If you are looking for versions of the table, you can get the version history (a sketch follows the links below).
Docs to refer to:
Data versioning using Time Travel feature in Delta Lake | by Prachi Kushwah | Medium
Work with Delta Lake table history | Databricks on AWS
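For example, a minimal sketch of pulling the version history in a notebook, assuming a placeholder table path:

```python
from delta.tables import DeltaTable

# Open the Delta table by path; "Tables/my_daily_table" is a placeholder.
dt = DeltaTable.forPath(spark, "Tables/my_daily_table")

# history() returns one row per commit: version, timestamp, operation, ...
dt.history().select("version", "timestamp", "operation").show(truncate=False)
```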
Hope this is helpful. Please let me know your use case if possible, and I will try to guide you better.
I'm looking for the parquet files because I'm having issues reading the same table from the SQL Endpoint and from a notebook: each one shows different data. So, for testing purposes, I want to check which parquet files are being read from the notebook.
Hi @amaaiia ,
Docs - Microsoft Spark Utilities (MSSparkUtils) for Fabric - Microsoft Fabric | Microsoft Learn
From the Lakehouse - [screenshot]
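A minimal sketch of listing the table's files with mssparkutils, assuming a placeholder table path:

```python
from notebookutils import mssparkutils

# List the files under the table folder of the attached Lakehouse.
# "Tables/my_daily_table" is a placeholder path.
for f in mssparkutils.fs.ls("Tables/my_daily_table"):
    print(f.name, f.size)
```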
In order to load the parquet file - [screenshot]
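And a minimal sketch of reading a single parquet file directly (the file name is a placeholder; use a name returned by `fs.ls` above):

```python
# Read one specific parquet file to inspect its contents.
df_file = spark.read.parquet("Tables/my_daily_table/part-00000-example.snappy.parquet")
df_file.show()
```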
Can you check whether this works for you?