The first two lines work fine on powerbi.com in the browser, but not in VS Code.
The second two lines work fine in both.
How can I get the relative path to work in VS Code? I'm happy to use a workaround if necessary (e.g., an absolute path).
Can you please tell me what the get_folder_contents function does? Please share the code so I can understand better.
Thanks
Hello!
The function below is returning a file not found error:
[Errno 2] No such file or directory: '/lakehouse/default/Files/PDS_Files/'
However, when I use the same file path in the Power BI service, the function correctly returns the list of files.
import os

def get_folder_contents(pds_dir):
    """Return a list of file paths in the given directory. Ignores subdirectories."""
    f_paths = []
    for filename in os.listdir(pds_dir):
        file_path = os.path.join(pds_dir, filename)
        if os.path.isfile(file_path):
            f_paths.append(file_path)
    return f_paths
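For what it's worth, the function itself behaves as expected when run against a local directory, which suggests the problem is the path rather than the code. A quick self-contained check using a temporary folder (so every path here is throwaway):

```python
import os
import tempfile

def get_folder_contents(pds_dir):
    """Return a list of file paths in pds_dir, ignoring subdirectories."""
    f_paths = []
    for filename in os.listdir(pds_dir):
        file_path = os.path.join(pds_dir, filename)
        if os.path.isfile(file_path):
            f_paths.append(file_path)
    return f_paths

# Sanity check in a throwaway directory: two files plus one subdirectory.
with tempfile.TemporaryDirectory() as tmp:
    for name in ("a.csv", "b.csv"):
        open(os.path.join(tmp, name), "w").close()
    os.mkdir(os.path.join(tmp, "sub"))
    found = sorted(os.path.basename(p) for p in get_folder_contents(tmp))

print(found)  # ['a.csv', 'b.csv'] — the subdirectory is correctly skipped
```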
Hi @matthewstainsby
Thanks for the details.
Fabric mounts the default lakehouse into the notebook, so you can read files in the default lakehouse via the absolute path '/lakehouse/default/'. However, this mount is not available in VS Code by design.
You can use the Spark API to read a file or folder in VS Code.
Load data into your Lakehouse with a notebook - Microsoft Fabric | Microsoft Learn
example:
df = spark.read.format("csv").option("header","true").load("Files/car_purchasing.csv")
Hope this helps. Please let me know if you have any further questions.
Hi @matthewstainsby
We haven't heard back from you on the last response and wanted to check whether you have found a resolution yet. If not, please reply with more details and we will try to help.
Thanks
Hello and thank you for the response
The problem is that I have a lot of files (more than 500), so the approach you suggested would not be practical.
Reading the list of files would let me do something like the loop below, which appends all the DataFrames together.
If I can read out the list of files, it would make our ETL in VS Code possible.
from functools import reduce
from pyspark.sql import DataFrame

dfs = []
for f_path in f_paths:
    dfs.append(spark.read.format("csv").option("header", "true").load(f_path))

# Spark DataFrames have no .concat; union them instead.
df_combined = reduce(DataFrame.unionByName, dfs)
Hi @matthewstainsby
We don't support mssparkutils or mounting a lakehouse in VS Code currently, so you cannot do these file-system operations in our extension. You could do the concat on the Fabric portal and then read the resulting data in VS Code.
Here is the Lakehouse public API:
Items - List Lakehouses - REST API (Lakehouse) | Microsoft Learn
You can use this to list lakehouse files too.
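As a concrete sketch of listing files from outside the notebook: OneLake also exposes an ADLS Gen2-compatible endpoint, so a plain HTTPS "list paths" request works. Everything below (workspace name, lakehouse name, folder) is a placeholder, and you would need a valid Microsoft Entra bearer token to actually make the call:

```python
ONELAKE_DFS = "https://onelake.dfs.fabric.microsoft.com"

def build_list_url(workspace: str, lakehouse: str, folder: str) -> str:
    """Build the ADLS Gen2 'list paths' URL for a lakehouse folder.

    OneLake speaks the ADLS Gen2 REST API, so the standard
    '?resource=filesystem&directory=...' listing request applies.
    All names passed in are placeholders for illustration.
    """
    directory = f"{lakehouse}.Lakehouse/Files/{folder}"
    return (
        f"{ONELAKE_DFS}/{workspace}"
        f"?resource=filesystem&recursive=false&directory={directory}"
    )

def list_lakehouse_files(workspace, lakehouse, folder, token):
    """Return file paths under the folder; needs a Microsoft Entra token."""
    import requests  # third-party; imported lazily so the URL helper stands alone

    resp = requests.get(
        build_list_url(workspace, lakehouse, folder),
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    return [p["name"] for p in resp.json().get("paths", [])]

# URL construction only (no network call, placeholder names):
print(build_list_url("MyWorkspace", "MyLakehouse", "PDS_Files"))
```

The returned names could then feed the same read-and-union loop shown earlier in the thread.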
Hope this helps. Please let me know if you have any further questions.
Thank you so much, I'll look into using the API and open another thread if I can't find a solution.
Hi @matthewstainsby
Glad that your query got resolved. Please continue using Fabric Community for any help regarding your queries.