michael_muell
New Member

UDF Connection to Lakehouse does not work

Hi all, 

 

I'm trying to get a UDF working that creates a file in the lakehouse. I saw that there is a sample that almost achieves what I want. Unfortunately, it does not work when I connect my lakehouse. I get the following error:

Running another sample snippet that connects to a warehouse item works perfectly fine, so it seems to be a problem specific to the lakehouse connection.
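For comparison, the warehouse samples follow this pattern with fn.FabricSqlConnection (a sketch with a placeholder alias and query, not the exact snippet I ran):

import fabric.functions as fn

udf = fn.UserDataFunctions()

# Warehouse connections use fn.FabricSqlConnection instead of fn.FabricLakehouseClient
@udf.connection(argName="sqlDB", alias="<My Warehouse alias>")
@udf.function()
def read_from_warehouse(sqlDB: fn.FabricSqlConnection) -> str:
    conn = sqlDB.connect()  # returns a database connection per the fabric.functions docs
    cursor = conn.cursor()
    cursor.execute("SELECT 1")
    row = cursor.fetchone()
    cursor.close()
    conn.close()
    return f"Query returned: {row[0]}"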

 

Can anyone help?  

{
    "functionName": "write_csv_file_in_lakehouse",
    "invocationId": "00000000-0000-0000-0000-000000000000",
    "status": "Failed",
    "errors": [
        {
            "errorCode": "WorkloadException",
            "subErrorCode": "NotFound",
            "message": "User data function: 'write_csv_file_in_lakehouse' invocation failed."
        }
    ]
}

The connection to the data source is set up:

[Screenshot: the Lakehouse connection configured under Manage connections]

 


This is the code I'm running: 

import datetime
import logging

import pandas as pd

import fabric.functions as fn

udf = fn.UserDataFunctions()

# Select 'Manage connections' and add a connection to a Lakehouse.
# Replace the alias "<My Lakehouse alias>" with your connection alias.
@udf.connection(argName="myLakehouse", alias="dateiupload")
@udf.function()
def write_csv_file_in_lakehouse(myLakehouse: fn.FabricLakehouseClient, employees: list) -> str:
    '''
    Description: Write employee data to the lakehouse as a timestamped CSV file using pandas.

    Args:
        myLakehouse (fn.FabricLakehouseClient): Fabric lakehouse connection.
        employees (list): List of employee records as [ID, Name, DeptID] arrays.

    Returns:
        str: Confirmation message with filename and viewing instructions.

    Example:
        employees = [[1, "John Smith", 31], [2, "Kayla Jones", 33]]
        Creates "Employees1672531200.csv" in the lakehouse
    '''
    logging.info('test')

    csvFileName = "Employees" + str(round(datetime.datetime.now().timestamp())) + ".csv"

    # Convert the data to a DataFrame
    df = pd.DataFrame(employees, columns=['ID', 'EmpName', 'DepID'])
    # Write the DataFrame to a CSV string
    csv_string = df.to_csv(index=False)

    # Upload the CSV file to the Lakehouse
    connection = myLakehouse.connectToFiles()
    csvFile = connection.get_file_client(csvFileName)
    csvFile.upload_data(csv_string, overwrite=True)

    csvFile.close()
    connection.close()
    return f"File {csvFileName} was written to the Lakehouse. Open the Lakehouse in https://app.fabric.microsoft.com to view the files"
1 ACCEPTED SOLUTION
v-venuppu
Community Support
Hi @michael_muell ,

Please use the code below:

import pandas as pd
import datetime
import fabric.functions as fn
import logging

udf = fn.UserDataFunctions()

@udf.connection(argName="myLakehouse", alias="LH123")
@udf.function()
def write_csv_file_in_lakehouse(myLakehouse: fn.FabricLakehouseClient, employees: list) -> str:
    """
    Writes employee data to Lakehouse Files as a CSV.
    """
    logging.info("Starting CSV file write to Lakehouse")

    # Create timestamped filename
    csvFileName = "Employees_" + str(round(datetime.datetime.now().timestamp())) + ".csv"

    # Create DataFrame and CSV string
    df = pd.DataFrame(employees, columns=["ID", "EmpName", "DepID"])
    csv_string = df.to_csv(index=False)
    csv_bytes = csv_string.encode("utf-8")  # Convert string to bytes

    # Connect to Lakehouse Files and upload
    connection = myLakehouse.connectToFiles()
    file_client = connection.get_file_client(csvFileName)
    file_client.upload_data(csv_bytes, overwrite=True)

    # Close connections
    file_client.close()
    connection.close()

    return f"File '{csvFileName}' was uploaded successfully."

 

Add the pandas library as shown below:

[Screenshot: pandas added under the function's library management]

To test this, I created a pipeline and it worked for me:

[Screenshot: pipeline run invoking the function]

The file got created in the Lakehouse as shown below:

[Screenshot: the generated CSV in the Lakehouse Files section]

 

Thank you.

 


5 REPLIES
This works! Thanks a lot! 

BhaveshPatel
Community Champion

This is how it works:
Python (pandas) --> Apache Spark (Data Lake) --> Delta Lake (Data Lakehouse)

# Convert the data to a DataFrame
df = pd.DataFrame(employees, columns=['ID', 'EmpName', 'DepID'])

# Convert the pandas DataFrame to a Spark DataFrame
sdf = spark.createDataFrame(df)
# or read the CSV back from the Lakehouse
sdf = spark.read.csv("Employees_" + str(round(datetime.datetime.now().timestamp())) + ".csv", header=True)

# Write the DataFrame to Delta format
sdf.write.format("delta").mode("overwrite").saveAsTable("DimEmployees")

You are missing the Spark DataFrame (Data Lake) step.

Thanks & Regards,
Bhavesh

v-venuppu
Community Support

Hi @michael_muell ,

Thank you for reaching out to the Microsoft Fabric Community.

The issue is likely caused by a mismatch in the Lakehouse connection alias, or by uploading string data with the wrong method. Make sure your UDF connection alias (dateiupload) matches exactly what's set in the UDF UI. Also, replace upload_data with upload_text, since you're uploading a CSV string rather than binary data. Here's the fix:
csvFile.upload_text(csv_string, overwrite=True)
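Alternatively, upload_data can stay if the CSV string is encoded to bytes first (the accepted solution above takes this route):

# Alternative: keep upload_data but pass bytes instead of a str
csv_bytes = csv_string.encode("utf-8")
csvFile.upload_data(csv_bytes, overwrite=True)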

 

Here's the documentation for your reference:
fabric.functions.FabricLakehouseClient class | Microsoft Learn

 

Thank you.

 

Hi @v-venuppu
Thanks for the reply. I triple-checked and the alias matches exactly.
The code change also did not help. The problem is the connection to the lakehouse.


 

Any other suggestions?

Best regards 
Michael

 
