juanisivo
Frequent Visitor

Loading a CSV from notebook resources to a Delta table in a lakehouse using a code snippet

Hello, I am using a code snippet to load a CSV into a table. The CSV is stored in the built-in resources folder of my notebook, and the target table is in a lakehouse already linked to my notebook. This is the code:

 

# Starts a load table operation in a Lakehouse artifact
notebookutils.lakehouse.loadTable(
    {
        "relativePath": './builtin/log/bronze_to_silver_log.csv', # path of the csv in the built in resources folder of the notebook
        "pathType": "File",
        "mode": "Append",
        "recursive": False,
        "formatOptions": {
            "format": "Csv",
            "header": True,
            "delimiter": ","
        }
    },
    'bronze_to_silver_log', # the name of the table
    'silver', # the name of the lakehouse
    workspaceId={workspace_id}
)
 
But I am getting the following error:
 
Py4JError: An error occurred while calling z:notebookutils.lakehouse.loadTable. Trace:
py4j.Py4JException: Method loadTable([class java.lang.String, class java.lang.String, class java.lang.String, class java.util.HashSet]) does not exist
    at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:321)
    at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:342)
    at py4j.Gateway.invoke(Gateway.java:276)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.base/java.lang.Thread.run(Thread.java:829)
 
It seems that the loadTable method doesn't exist.
Any guesses?
1 ACCEPTED SOLUTION
v-prasare
Community Support

Hi @juanisivo,

 

Replace the notebookutils.lakehouse.loadTable block with standard PySpark code using spark.read and saveAsTable(); this is the official, stable, and Fabric-supported approach for loading data from a CSV file into a Lakehouse table.

Microsoft recommends using PySpark APIs in Fabric Notebooks for reading/writing data to Lakehouse tables. The method notebookutils.lakehouse.loadTable() is not part of the documented, supported APIs and is likely either an internal or deprecated utility.

 

You can use PySpark to load data from CSV, Parquet, JSON, and other file formats into a lakehouse. You can also create tables directly from these DataFrames.

Example:

# Read the CSV (with a header row) from the lakehouse Files area
df = spark.read.option("header", True).csv("Files/YourFolder/yourfile.csv")
# Write it to the lakehouse as a Delta table
df.write.mode("overwrite").saveAsTable("lakehouse_name.table_name")

 

 

Thanks,

Prashanth Are

MS Fabric community support

 

If this post helps, then please consider accepting it as the solution to help other members find it more quickly, and give Kudos if it helped you resolve your query.

 


4 REPLIES
v-prasare
Community Support

@juanisivo,
As we haven't heard back from you, we wanted to kindly follow up to check whether the provided solution worked for you. Please let us know if you need any further assistance.

 

 

 

Thanks,

Prashanth Are

MS Fabric community support

 

If this post helps, then please consider accepting it as the solution to help other members find it more quickly, and give Kudos if it helped you resolve your query.

Srisakthi
Continued Contributor

Hi @juanisivo ,

 

There is no such method available.

As per the article, you can use relative paths like builtin/YourData.txt for quick exploration, and notebookutils.nbResPath helps you compose the full path. You can use Spark to read from the relative path and write to a table; a sketch follows the link below.

Refer - https://learn.microsoft.com/en-us/fabric/data-engineering/how-to-use-notebook#notebook-resources
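For illustration, a minimal sketch of that approach, assuming notebookutils.nbResPath resolves to the notebook's built-in resources folder and using the file name from the question (adjust the path if nbResPath resolves to the folder's parent):

# Compose the full driver-local path to the CSV in the built-in resources folder
csv_path = f"file:{notebookutils.nbResPath}/log/bronze_to_silver_log.csv"

# Read with Spark and append to the lakehouse table
df = spark.read.option("header", True).option("delimiter", ",").csv(csv_path)
df.write.mode("append").saveAsTable("bronze_to_silver_log")

If Spark cannot read the driver-local path directly, reading with pandas on the driver and converting via spark.createDataFrame works as well.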

 

Regards,

Srisakthi

 

If this response helps you, please "Accept as solution" and give "Kudos". It can help others.

Hi @Srisakthi,

 

I don't understand why you say that there is no such method. The method does exist, as shown in the attached image. It is part of the lakehouse help.

And not only that, it is also used in a built-in code snippet called "Load table" that "starts a load table operation in a Lakehouse artifact".

 

Do you suggest another solution to copy a CSV file from the notebook resources to a table in a lakehouse?

 

[Screenshot 2025-05-13 084050.png: the notebookutils.lakehouse help listing the loadTable method]

 

Best regards,

Juan
