Ka13
Helper II

Write_delta in Fabric Python Notebook Error

Hi,

I have a question about a Fabric Python notebook: I get an error when using write_deltalake.

I want to write data from a pandas DataFrame to the lakehouse table cust_test.

Can you please suggest a fix? The error and the notebook code are below.



OSError: Generic MicrosoftAzure error: URL did not match any known pattern for scheme: abfss://<workspace-name>@onelake.dfs.fabric.microsoft.com/test.Lakehouse/Tables/cust_test



Fabric Notebook -

%pip install deltalake


import pandas as pd
from deltalake import write_deltalake
import notebookutils

data = {
    "CustomerID": [1, 2, 3],
    "Name": ["Alice", "Bob", "Charlie"],
    "Age" : [33,27,22]
}

df = pd.DataFrame(data)
print(df.info())



# OneLake token plus the flag delta-rs needs for the Fabric endpoint
storage_options = {
    "bearer_token": notebookutils.credentials.getToken("storage"),
    "use_fabric_endpoint": "true"
}



abfss_path = "abfss://<workspace-name>@onelake.dfs.fabric.microsoft.com/test.Lakehouse/Tables/cust_test"




write_deltalake(
    abfss_path,
    df,
    mode="append",
    storage_options=storage_options
)
Asmita_27
Regular Visitor

Hi @Ka13,

 

You're getting this error because the deltalake (delta-rs) library does not support name-based ABFSS paths in Fabric. It requires the workspace ID and the lakehouse item ID (GUIDs), not the workspace or lakehouse names.

You can read both IDs straight from the lakehouse URL in your browser.

E.g. https://app.fabric.microsoft.com/groups/<workspace-id>/lakehouses/<lakehouse-id>?sparkUpgradeToFabric=1&experience=fabric-developer

The correct ABFSS format is (note that the .Lakehouse suffix is dropped when you use GUIDs):

abfss://<workspace-id>@onelake.dfs.fabric.microsoft.com/<lakehouse-id>/Tables/cust_test
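
For example, swapping the GUID-based path into the original snippet should be enough. A minimal sketch, reusing the df and storage_options already defined in the post above (the placeholder IDs are the ones from your URL):

abfss_path = "abfss://<workspace-id>@onelake.dfs.fabric.microsoft.com/<lakehouse-id>/Tables/cust_test"

write_deltalake(
    abfss_path,
    df,
    mode="append",
    storage_options=storage_options
)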
 
 

@Asmita_27 - thanks for your reply. How can I get the workspace ID and lakehouse ID programmatically in a Fabric Python notebook, so I can pass them into the path?

Hello @Ka13, I doubt you'd be able to import mssparkutils, as it is a Spark utility and you're using a Python notebook.

 

Here's code you can use in a Python notebook:

 

import sempy.fabric as fabric

def get_workspace_and_lakehouse_id(target_name="my_lakehouse"):
    # ID of the workspace the notebook is running in
    workspace_id = fabric.get_notebook_workspace_id()

    # List the Lakehouse items in that workspace
    items = fabric.list_items(type="Lakehouse", workspace=workspace_id)

    lakehouse_id = None

    # Each row holds the item ID in column 0 and the display name in column 1
    for it in items.values:
        if it[1] == target_name:
            lakehouse_id = it[0]
            break

    return workspace_id, lakehouse_id

print(get_workspace_and_lakehouse_id())
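
With those two IDs you can then build the GUID-based path that write_deltalake expects. A sketch, reusing the df and storage_options from the original post and the lakehouse name "test" from its path:

workspace_id, lakehouse_id = get_workspace_and_lakehouse_id("test")

abfss_path = f"abfss://{workspace_id}@onelake.dfs.fabric.microsoft.com/{lakehouse_id}/Tables/cust_test"

write_deltalake(abfss_path, df, mode="append", storage_options=storage_options)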

 

I trust this will be helpful. If you found this guidance useful, you are welcome to acknowledge with a Kudos or by marking it as a Solution.

 

Hi,

To get the Workspace ID and Lakehouse ID in a Fabric notebook, you can list lakehouses and match by display name.

Code:

import notebookutils

def get_lakehouse_ids_by_name(name: str):
    # notebookutils.lakehouse.list() returns the lakehouses in the current workspace
    for lh in notebookutils.lakehouse.list():
        if lh.get("displayName") == name:
            return lh.get("workspaceId"), lh.get("id")
    raise ValueError(f"Lakehouse named '{name}' not found.")

 

workspace_id, lakehouse_id = get_lakehouse_ids_by_name("Write_Delta")
print("Workspace ID:", workspace_id)
print("Lakehouse ID:", lakehouse_id)
deborshi_nag
Solution Sage

Hello @Ka13 

 

If you don't need to write in Delta format, you can simply write your pandas dataset using the File API path. First add the lakehouse as a data item in your Python notebook. Then, in the /Files section of the lakehouse, click the three dots on the folder you want to write to and copy the File API path from the context menu. You can use that path directly with the pandas to_csv method to write the dataset into the folder.

 

df.to_csv('/lakehouse/default/Files/pandas_data/customers.csv', index=False, encoding="utf-8")

 

For writing in Delta format, it's easiest to use a Spark notebook and the ABFSS path, because Delta is the native table format in Spark.
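
For reference, a minimal Spark-notebook sketch, assuming the lakehouse is attached as the notebook's default lakehouse (so saveAsTable resolves against it; spark is the session Fabric provides):

# Convert the pandas DataFrame to a Spark DataFrame and save it as a Delta table
spark_df = spark.createDataFrame(df)
spark_df.write.format("delta").mode("append").saveAsTable("cust_test")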

 

I trust this will be helpful. If you found this guidance useful, you are welcome to acknowledge with a Kudos or by marking it as a Solution.
