DebbieE
Community Champion

Saving using abfss path and delta parquet being pushed to unidentified Folder

We have recently been working with Microsoft on best practices for notebooks and have followed their advice.

 

Test 1 is on a lakehouse without schemas enabled, and this appears to be working fine.

Test 2 is on a lakehouse WITH schemas enabled, and this is where the problem is still happening.

 

So we get the IDs for the workspace and lakehouse and then build the abfss path, which prints out the correct path for the file:

schema_name="fwk"

params_table = "FWK_Pipeline_Parameters"
 
params_src_path = f"abfss://{dataeng_workspace_id}@onelake.dfs.fabric.microsoft.com/{dataeng_lakehouse_id}/Tables/{schema_name}.{params_table}"
print(params_src_path)
This prints out the correct path.
 
We then create the DataFrame with some data in it, and finally save it as a Delta table:
 
dfparams.write \
  .format("delta")\
  .mode("overwrite") \
  .option("overwriteSchema", "true") \
  .save(params_src_path)

print(f"successfully overwritten to {params_src_path}")
 
But as you can see in the screenshot (unidentifiedFolder.png), it just goes into an Unidentified folder.

The only difference between this and the working one is the schema.
 
Can anyone help me get past this? I don't want to have to go backwards to a Lakehouse that isn't schema enabled.
 
1 ACCEPTED SOLUTION
apturlov
Responsive Resident

Hi @DebbieE. If you want to save a DataFrame as a managed Delta table into a Lakehouse, you should use

.saveAsTable

instead of .save, and use a two-part naming convention for the target table, {schema_name}.{table_name}. See the example here:

(
    dfparams
    .write
    .format("delta")
    .mode("overwrite")
    .option("overwriteSchema", "true")
    .saveAsTable("fwk.FWK_Pipeline_Parameters")
)

 

By going this way you'll avoid a dependency on the workspace ID and Lakehouse ID, and your code will be much easier to deploy.

I also strongly suggest using only schema-enabled Lakehouses, as non-schema-enabled ones will be deprecated at some point.

 

If you still insist on using an abfss path you can try this approach:

# Resolve the local mount path for the lakehouse
mount_path = mssparkutils.fs.getMountPath(lakehouse_name)

# Construct the table path dynamically (note the slash between schema and table)
if schema_name:
    table_path = f"{mount_path}/Tables/{schema_name}/{table_name}"
else:
    table_path = f"{mount_path}/Tables/{table_name}"

 

If you find this answer useful or it solves your problem, please consider giving it kudos and/or marking it as a solution.

 


3 REPLIES
apturlov
Responsive Resident

@DebbieE thanks for your feedback and for accepting my answer as a solution. You are correct that in an abfss path you should not use a dot (.) between schema and table but a slash (/). You can use the two-part name with the dot in the .saveAsTable function.
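To illustrate the two conventions side by side (the workspace and lakehouse IDs below are placeholders, not real values):

```python
schema_name = "fwk"
table_name = "FWK_Pipeline_Parameters"

# saveAsTable takes a two-part name, with a dot between schema and table
two_part_name = f"{schema_name}.{table_name}"

# an abfss path separates schema and table with a slash instead
workspace_id = "<workspace-guid>"   # placeholder
lakehouse_id = "<lakehouse-guid>"   # placeholder
abfss_path = (
    f"abfss://{workspace_id}@onelake.dfs.fabric.microsoft.com/"
    f"{lakehouse_id}/Tables/{schema_name}/{table_name}"
)

print(two_part_name)  # fwk.FWK_Pipeline_Parameters
print(abfss_path)
```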

DebbieE
Community Champion

Unfortunately, that is not the advice Microsoft have given us. Microsoft have specifically told us that the best practice is to use the abfss path, so I'm not 'insisting'; I'm simply trying to work with the best practice I have been told to follow in workshops.

 

Unfortunately, this code doesn't seem to work:

Py4JJavaError: An error occurred while calling z:notebookutils.fs.getMountPath. : java.io.FileNotFoundException: The mount path /synfs/notebook/#########################/framework_lh doesn't exist, please check if you pass the right mount point.

 

So this is the path I have been using: f"abfss://{dataeng_workspace_id}@onelake.dfs.fabric.microsoft.com/{dataeng_lakehouse_id}/Tables/{schema_name}.{params_table}"

 

I thought I would try {schema_name}/{table_name} from the end of your line of code, so my path ends with Tables/fwk/FWK_Pipeline_Parameters (and I'm still using the code given to me by Microsoft, so I haven't changed too much).

 

And it has worked, so great, thank you. All that was wrong, it seems, was the . between schema and table. All sorted.
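Since the only bug was the dot in the Tables/ segment, a small (hypothetical) sanity-check helper like this could catch the mistake before the write ever runs:

```python
def check_tables_path(path: str) -> None:
    """Raise if the segment after /Tables/ contains a dot
    (hypothetical helper; schema and table must be slash-separated)."""
    parts = path.split("/Tables/", 1)
    if len(parts) == 2 and "." in parts[1]:
        raise ValueError(
            f"Use a slash between schema and table, not a dot: {parts[1]}"
        )

# slash-separated path passes silently
check_tables_path(
    "abfss://ws@onelake.dfs.fabric.microsoft.com/lh/Tables/fwk/FWK_Pipeline_Parameters"
)

# dot-separated path raises ValueError
try:
    check_tables_path(
        "abfss://ws@onelake.dfs.fabric.microsoft.com/lh/Tables/fwk.FWK_Pipeline_Parameters"
    )
except ValueError as err:
    print(err)
```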
