rblevi01
New Member

Fabric Spark Job Definition Create Lakehouse Table pyspark

Should df.write.format("delta").saveAsTable("test2") be executed from a Fabric Spark Job Definition? Or does it run on multiple nodes and attempt to create the table many times?

 

I ask because if I execute the code below, the error is [TABLE_OR_VIEW_ALREADY_EXISTS].

 

I am sure the table does not exist in the Lakehouse associated with the Spark Job Definition.

 

The eventual goal is to create a job that creates thousands of tables from files. If saveAsTable can only be run from a notebook, is there an alternative way to create tables in a Lakehouse from a job?

 

Here is the simple code that, no matter the table name, always returns an error that the table already exists.

 

from pyspark.sql import SparkSession

if __name__ == "__main__":
    # Initialize a SparkSession
    spark = SparkSession.builder.appName("TextToTable").getOrCreate()

    # df is a Spark DataFrame containing CSV data from "Files/test.txt".
    df = spark.read.format("csv").option("header", "true").load("Files/test.txt")
    df.show()

    # Create a new table -- THIS IS WHERE THE ERROR HAPPENS
    df.write.format("delta").saveAsTable("test2")

    # Stop the SparkSession
    spark.stop()

1 ACCEPTED SOLUTION
Anonymous
Not applicable

Hi @rblevi01 ,

Thanks for using Fabric Community.
Apologies for the issue you are facing. 

Please try the modified code below; you need to set the save mode to overwrite:

df.write.mode('overwrite').format('delta').saveAsTable('new_table11')

I was able to execute the Spark Job Definition without any issues.

Hope this is helpful. Please let me know in case of further queries.
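Toward the eventual goal of creating many tables from files, the same overwrite pattern can be put in a loop. A minimal sketch, assuming the job reads CSV files from the Lakehouse "Files" area; the file list, the create_tables helper, and the naming scheme are hypothetical, not part of the original post:

```python
def table_name_from_path(path: str) -> str:
    """Derive a table name from a file path, e.g. 'Files/test.txt' -> 'test'."""
    return path.rsplit("/", 1)[-1].rsplit(".", 1)[0]

def create_tables(spark, source_files):
    """Load each CSV file and save it as a Delta table named after the file."""
    for path in source_files:
        df = spark.read.format("csv").option("header", "true").load(path)
        # mode('overwrite') avoids TABLE_OR_VIEW_ALREADY_EXISTS on re-runs.
        df.write.mode("overwrite").format("delta").saveAsTable(table_name_from_path(path))

# In the Spark Job Definition's main script (paths are hypothetical):
# from pyspark.sql import SparkSession
# spark = SparkSession.builder.appName("FilesToTables").getOrCreate()
# create_tables(spark, ["Files/test.txt", "Files/other.csv"])
# spark.stop()
```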


2 REPLIES 2

Thank you, that worked. Not sure why overwrite has to be used for a table that does not exist, but it works and I really appreciate the response.
