Fabric is giving an error when loading a pipe-separated .txt file to a Delta table. Any idea how to load such files?
Hello @akkibuddy7
You cannot use the Load to Table option with pipe (|) delimited files; that option is for CSV.
You have to use a Spark notebook to load them to tables, for example:
df = (spark.read.format("csv")
    .option("header", "true")      # set to "false" if the file has no header row
    .option("delimiter", "|")
    .load("/path/to/your/file.txt"))
df.write.format("delta").mode("overwrite").saveAsTable("table_name")
If this is helpful, please accept the answer and give kudos.
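A quick way to confirm the load worked, assuming the notebook is attached to the same Lakehouse and table_name matches the table created above:
spark.sql("SELECT * FROM table_name LIMIT 10").show()
spark.sql("DESCRIBE DETAIL table_name").show()   # confirms the table was written in Delta format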
Is there any approach for incremental data? If the S3 txt file gets updated, I will have to run the dataframe again to pull the updated table, and that's a bit of a tedious job.
Realized the same approach a few minutes back! But your code solves the issue. Thanks again!
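On the incremental question above: the simplest option is to re-run the same notebook on a schedule and keep mode("overwrite"), which reloads the whole file each time. If reloading everything is too heavy and the rows carry a unique key, a Delta MERGE (upsert) is the usual pattern, so matching rows are updated and only new rows are inserted. A minimal sketch, assuming a hypothetical unique key column id and the same table_name as above:
from delta.tables import DeltaTable

updates = (spark.read.format("csv")
    .option("header", "true")
    .option("delimiter", "|")
    .load("/path/to/your/file.txt"))

target = DeltaTable.forName(spark, "table_name")
(target.alias("t")
    .merge(updates.alias("s"), "t.id = s.id")   # "id" is a hypothetical unique key column
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
Scheduling the notebook (or calling it from a pipeline) would then handle re-runs whenever the source file changes.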