Hello, I am using a code snippet to load a CSV into a table. The CSV is stored in the built-in resources folder of my notebook, and the target table is in a lakehouse already linked to my notebook. This is the code:
# Starts a load table operation in a Lakehouse artifact
notebookutils.lakehouse.loadTable(
    {
        "relativePath": './builtin/log/bronze_to_silver_log.csv',  # path of the CSV in the built-in resources folder of the notebook
        "pathType": "File",
        "mode": "Append",
        "recursive": False,
        "formatOptions": {
            "format": "Csv",
            "header": True,
            "delimiter": ","
        }
    },
    'bronze_to_silver_log',  # the name of the table
    'silver',                # the name of the lakehouse
    workspaceId={workspace_id}
)
Hi @juanisivo,
Replace the notebookutils.lakehouse.loadTable block with standard PySpark code using spark.read and .saveAsTable(); this is the official, stable, Fabric-supported approach for loading data from a CSV file into a Lakehouse table.
Microsoft recommends using PySpark APIs in Fabric Notebooks for reading/writing data to Lakehouse tables. The method notebookutils.lakehouse.loadTable() is not part of the documented, supported APIs and is likely either an internal or deprecated utility.
You can use PySpark to load data from CSV, Parquet, JSON, and other file formats into a lakehouse. You can also create tables directly from these DataFrames.
For example:
# Read the CSV (treating the first row as a header) into a Spark DataFrame
df = spark.read.option("header", True).csv("Files/YourFolder/yourfile.csv")
# Write the DataFrame to a Lakehouse table, replacing any existing data
df.write.mode("overwrite").saveAsTable("lakehouse_name.table_name")
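If you want to keep adding rows to the existing table, as the "Append" mode in your original snippet suggests, you can switch the write mode. A minimal variant using the table and lakehouse names from your question:
# Append to the existing table instead of overwriting it,
# matching the "Append" mode of the original loadTable call
df.write.mode("append").saveAsTable("silver.bronze_to_silver_log")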
Thanks,
Prashanth Are
MS Fabric community support
If this post helps, please consider accepting it as the solution to help other members find it more quickly, and give Kudos if it helped you resolve your query.
@juanisivo,
As we haven't heard back from you, we wanted to kindly follow up and check whether the solution provided resolved your issue. Let us know if you need any further assistance.
Thanks,
Prashanth Are
MS Fabric community support
If this post helps, please consider accepting it as the solution to help other members find it more quickly, and give Kudos if it helped you resolve your query.
Hi @juanisivo,
There is no such method available.
As per the article, you can use relative paths like builtin/YourData.txt for quick exploration. The notebookutils.nbResPath property helps you compose the full path. You can then use Spark to read from that path and write to a table.
Refer to https://learn.microsoft.com/en-us/fabric/data-engineering/how-to-use-notebook#notebook-resources
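A minimal sketch of this approach, assuming the CSV sits at builtin/log/bronze_to_silver_log.csv as in your question and that the built-in resources folder is readable through local file APIs, as the linked article describes:
import pandas as pd

# The notebook's built-in resources are reachable via relative local paths,
# so pandas can read the CSV directly; notebookutils.nbResPath gives the
# absolute path to the resources root if a full path is needed.
pdf = pd.read_csv("builtin/log/bronze_to_silver_log.csv")

# Convert to a Spark DataFrame and append to the lakehouse table,
# matching the "Append" mode of the original snippet
df = spark.createDataFrame(pdf)
df.write.mode("append").saveAsTable("silver.bronze_to_silver_log")
Going through pandas sidesteps the question of whether Spark executors can see the driver-local resources folder; alternatively, the full path composed from notebookutils.nbResPath can be passed to spark.read, as the article suggests.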
Regards,
Srisakthi
If this response helps you, please "Accept as solution" and give "Kudos". It can help others.
Hi @Srisakthi,
I don't understand why you say that there is no such method. The method does exist, as shown in the attached image; it is part of the lakehouse module's help.
Not only that, it is also used in a built-in code snippet called "Load table", which "starts a load table operation in a Lakehouse artifact".
Do you suggest another way to copy a CSV file from the notebook resources to a table in a lakehouse?
Best regards,
Juan