We have recently been working with Microsoft on best practices for notebooks and have followed their advice.
Test 1 is on a lakehouse without schema enabled and this appears to be working fine
Test 2 is on a lakehouse WITH schema enabled and this is where the problem is still happening.
So we retrieve the ids for the workspace and lakehouse and then build the abfss path, which prints out the correct path for the file:
schema_name="fwk"
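For context, the path construction in question can be sketched like this (the ids below are placeholders for the real workspace and lakehouse GUIDs; the full f-string appears later in the thread):

```python
# Sketch of the ABFSS path build described above.
# WORKSPACE_ID / LAKEHOUSE_ID are placeholders, not real ids.
dataeng_workspace_id = "WORKSPACE_ID"
dataeng_lakehouse_id = "LAKEHOUSE_ID"
schema_name = "fwk"
params_table = "FWK_Pipeline_Parameters"

abfss_path = (
    f"abfss://{dataeng_workspace_id}@onelake.dfs.fabric.microsoft.com/"
    f"{dataeng_lakehouse_id}/Tables/{schema_name}.{params_table}"
)
print(abfss_path)
```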
Hi @DebbieE. If you want to save a DataFrame as a managed Delta table in a Lakehouse, you should use .saveAsTable instead of .save, with a two-part name for the target table: {schema_name}.{table_name}. See the example here:
(
dbfparams
.write
.format("delta")
.mode("overwrite")
.option("overwriteSchema", "true")
.saveAsTable("fwk.FWK_Pipeline_Parameters")
)
This way you avoid a dependency on the workspace id and Lakehouse id, and your code will be much easier to deploy.
I also strongly suggest using only schema-enabled Lakehouses, as non-schema-enabled Lakehouses will be deprecated at some point.
If you still insist on using an abfss path you can try this approach:
# lakehouse_name, schema_name and table_name are assumed to be defined above
mount_path = mssparkutils.fs.getMountPath(lakehouse_name)

# Construct the table path dynamically
if schema_name:
    abfss_path = f"{mount_path}/Tables/{schema_name}/{table_name}"
else:
    abfss_path = f"{mount_path}/Tables/{table_name}"
If you find this answer useful or solving your problem please consider giving it kudos and/or marking it as a solution.
@DebbieE thanks for your feedback and for accepting my answer as a solution. You are correct that in an abfss path you should not use a dot (.) but a slash (/). The two-part name with the dot only applies to the .saveAsTable function.
Unfortunately, that is not the advice Microsoft have given us. Microsoft specifically told us that best practice is to use the abfss path. So I'm not 'insisting'; I'm simply trying to follow the best practice we were given in workshops.
Unfortunately this code doesn't seem to work:
Py4JJavaError: An error occurred while calling z:notebookutils.fs.getMountPath. : java.io.FileNotFoundException: The mount path /synfs/notebook/#########################/framework_lh doesn't exist, please check if you pass the right mount point.
So this is the path I have been using:
f"abfss://{dataeng_workspace_id}@onelake.dfs.fabric.microsoft.com/{dataeng_lakehouse_id}/Tables/{schema_name}.{params_table}"
I thought I would try {schema_name}/{table_name} from the end of your line of code, so the path now ends with Tables/fwk/FWK_Pipeline_Parameters (and I'm still using the code given to me by Microsoft, so I haven't changed too much).
And it has worked, so great. Thank you. All that was wrong, it seems, was the . between schema and table. All sorted.
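To summarize the fix: in the abfss path the schema and table must be separated by a slash, not a dot. A minimal sketch with placeholder ids (WORKSPACE_ID and LAKEHOUSE_ID stand in for the real GUIDs):

```python
# Placeholder ids; substitute the real workspace/lakehouse GUIDs.
dataeng_workspace_id = "WORKSPACE_ID"
dataeng_lakehouse_id = "LAKEHOUSE_ID"
schema_name = "fwk"
params_table = "FWK_Pipeline_Parameters"

# Working form: slash between schema and table name.
# (The failing form used f"...Tables/{schema_name}.{params_table}" instead.)
abfss_path = (
    f"abfss://{dataeng_workspace_id}@onelake.dfs.fabric.microsoft.com/"
    f"{dataeng_lakehouse_id}/Tables/{schema_name}/{params_table}"
)
print(abfss_path)  # ends with .../Tables/fwk/FWK_Pipeline_Parameters
```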