# Fabric Spark connector: provides the synapsesql() method on DataFrameWriter
import com.microsoft.spark.fabric

df.write \
    .mode("overwrite") \
    .synapsesql(f"{silverWarehouse}.{tableSchema}.{silver_table}")
FROM 'abfss://{workspace_id_hyphen}@{workspace_id}.zd9.dfs.fabric.microsoft.com/_system/artifacts/{notebook_id}/5f15f8eb-09f7-4148-aea3-e293ec52726c/user/trusted-service-user/{table_name}2654d7047d3b407281f0513a1950d92e/*.parquet'
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: Content of directory on path
'https://{workspace_id}.zd9.dfs.fabric.microsoft.com/{workspace_id_hyphen}/_system/artifacts/{notebook_id}/5f15f8eb-09f7-4148-aea3-e293ec52726c/user/trusted-service-user/{table_name}2654d7047d3b407281f0513a1950d92e/*.parquet' cannot be listed.
It's the read that is failing, not the write, would you agree?
Have you redacted these paths? The error says '{workspace_id}'; is that you redacting the output, or has something gone wrong with how your path is being compiled?
It seems the issue is somewhere else, not in the write snippet you supplied.
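One way to confirm which side is failing: try listing the staging directory from the error message yourself. A rough sketch (the path is a placeholder you would replace with the exact directory from your error, and I'm assuming the standard mssparkutils that Fabric notebooks ship with):

from notebookutils import mssparkutils  # Fabric notebook utilities (assumed available)

# Placeholder: paste the exact staging directory from your error message here
staging_path = "abfss://..."

try:
    entries = mssparkutils.fs.ls(staging_path)
    print(f"Spark can list the path ({len(entries)} entries), so the write side is probably fine")
except Exception as err:
    # If this also fails, the connector's read of its own staging files is what is broken
    print(f"Listing failed: {err}")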
Thanks justinjbird, but where can we change the SAS key for OneLake?
The thing is, the code mentioned worked fine for a long time and we never had any issue. The purpose is to save a DataFrame transformed from the Lakehouse to the Warehouse.
MS Fabric offers the "synapsesql" method to do this job; it worked fine until five days ago.
This doesn't help us much on its own. What do you mean by "silver warehouse"? Was there a bronze table working before? Can you share some screenshots?
You can easily reproduce the error: in a notebook, read data from a MS Fabric Lakehouse and save it to a MS Fabric Warehouse.
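For example, a minimal sketch (the Lakehouse and Warehouse names here are illustrative placeholders, not our real ones):

import com.microsoft.spark.fabric  # Fabric Spark connector that provides synapsesql()

# Read any table from a Lakehouse attached to the notebook (illustrative name)
df = spark.read.table("bronze_lakehouse.some_table")

# Write it to a Fabric Warehouse; this is the call that started failing
df.write \
    .mode("overwrite") \
    .synapsesql("silver_warehouse.dbo.some_table")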
Thanks "BahveshPatel" for your reponse. Sorry, some data and screenshot are sensitives so i can't share that.
But i can explain again the issue :
We work with medallion architecture, we use spark notebook to read the data from "bronze lakehouse", and then transforme it and finally we wrote the transformed dataframe in "Fabric Warehouse". This warehouse we name it "Silver warehouse".
In the notebook, we use "synapsesql" methode to write the dataframe to the warehouse (the source code shared with you). This methode run without any issue until 5 days ago. indicate that we can not list parquet file, and seems to SAS Key to change, but nothing changed in our environnement Fabric.
Is that clear enough for you ? is that help you to figure out the issue ?
Béchir
Due to a change in synapsesql, meaning synapsesql should not be used going forward (deprecated), we need to use Databricks Delta Lake open-source tables in the bronze + silver warehouse. synapsesql uses the data lake, whereas Databricks Delta Lake uses a _delta_log (transaction log) with parquet files. You should not use a Spark notebook for everything.
Python --> Apache Spark (Data Lake) --> Databricks Delta Lake (PySpark initially, Spark SQL subsequently)
Also, Lakehouse = Fabric Warehouse = Power BI / SSAS Tabular Semantic Model
Thank you, Bhavesh Patel, for your response.
However, your answer seems to be outside the scope of our current setup. We are working entirely within a Microsoft Fabric environment.
The objective is to write data from a Microsoft Fabric Lakehouse to a Microsoft Fabric Warehouse using a Fabric notebook. Databricks is not in scope for us.
The synapsesql function was working correctly, and according to Microsoft, it has not been deprecated.
Hi @bechirbenltaief,
If you have not changed anything so far, then I would also guess that the SAS key needs to be renewed.
Best regards
Thanks spaceman127 for your hint, but we are on OneLake in a Fabric environment; the SAS key is a transparent layer there.
All right,
That was not clear from the original question.
It seemed as if you were asking about a storage account.
That was a misunderstanding.
Do you have any more information for us?
Best regards
Just ruling out the simple things... has your SAS key expired by any chance?