Should df.write.format("delta").saveAsTable("test2") be executed from a Fabric Spark Job Definition? Or does it run on multiple nodes and attempt to create the table many times?
I ask because if I execute the code below, the error is [TABLE_OR_VIEW_ALREADY_EXISTS].
I am sure the table does not exist in the Lakehouse associated with the Spark Job Definition.
The eventual goal is to create a job that creates 1000s of tables from files. If saveAsTable can only be run from a notebook, is there an alternative to create tables in a Lakehouse from a job?
Here is the simple code that, no matter the table name, always returns an error that the table already exists.
from pyspark.sql import SparkSession

if __name__ == "__main__":
    # Initialize a SparkSession
    spark = SparkSession.builder.appName("TextToTable").getOrCreate()

    df = spark.read.format("csv").option("header", "true").load("Files/test.txt")
    # df is now a Spark DataFrame containing the CSV data from "Files/test.txt".
    df.show()

    # Create a new table - THIS IS WHERE THE ERROR HAPPENS
    df.write.format("delta").saveAsTable("test2")

    # Stop the SparkSession
    spark.stop()
Hi @rblevi01,
Thanks for using Fabric Community, and apologies for the issue you are facing.
Please try the modified code below; you need to set the save mode to "overwrite":
df.write.mode("overwrite").format("delta").saveAsTable("new_table11")
With this change, I was able to execute the Spark Job Definition without any issues.
Hope this is helpful. Please let me know in case of further queries.
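For the stated goal of creating thousands of tables from files, the same overwrite pattern can be applied in a loop inside the Spark Job Definition. Below is a minimal sketch, not a confirmed Fabric recipe: the file list, the file-name-to-table-name convention, and the app name are all hypothetical, and it assumes CSV files under the Lakehouse "Files/" folder.

import os
from pyspark.sql import SparkSession

if __name__ == "__main__":
    spark = SparkSession.builder.appName("FilesToTables").getOrCreate()

    # Hypothetical list of source files; in Fabric you could instead
    # enumerate the folder, e.g. with mssparkutils.fs.ls("Files/").
    source_files = ["Files/test.txt", "Files/sales.csv", "Files/customers.csv"]

    for path in source_files:
        df = spark.read.format("csv").option("header", "true").load(path)
        # Derive the table name from the file name (a hypothetical convention).
        table_name = os.path.splitext(os.path.basename(path))[0]
        # mode("overwrite") avoids TABLE_OR_VIEW_ALREADY_EXISTS on reruns.
        df.write.mode("overwrite").format("delta").saveAsTable(table_name)

    spark.stop()

Note that the script body runs once on the driver; saveAsTable is a driver-side action, so the executors do not each attempt to create the same table.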
Thank you, that worked. I am not sure why overwrite has to be used for a table that does not exist, but it works and I really appreciate the response.
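As for why overwrite is needed for a table that seemingly does not exist: one possible explanation, and it is only an assumption here, is that an earlier run of the job already registered the table in the metastore even though it is no longer visible in the Lakehouse, so the default save mode errorifexists fails. You can check the metastore directly before writing. A sketch, assuming Spark 3.3+ (where spark.catalog.tableExists is available) and an existing SparkSession named spark:

# The table name "test2" is just the one from this thread.
if spark.catalog.tableExists("test2"):
    print("test2 is already registered in the metastore")

# The default save mode is "errorifexists". Alternatives:
#   "overwrite" - replace the table if it already exists
#   "ignore"    - silently skip the write if the table exists
#   "append"    - add rows to an existing table
df.write.mode("overwrite").format("delta").saveAsTable("test2")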