Fabric is giving an error when loading a txt file with pipe-separated (|) values into a Delta table. Any idea how to load such files?
Hello @akkibuddy7
You cannot use the Load to Table option with |-delimited files; that option is only for CSV files.
You have to use a Spark notebook to load them into tables:
# Read the pipe-delimited text file using the CSV reader
# (set "header" to "false" if the file has no header row)
df = spark.read.format("csv") \
    .option("header", "true") \
    .option("delimiter", "|") \
    .load("/path/to/your/file.txt")
df.write.format("delta").mode("overwrite").saveAsTable("table_name")
If this is helpful, please accept the answer and give kudos.
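As a quick sanity check (a small sketch, assuming the same table name used above), you can read the Delta table back in the notebook and show a few rows:

# Read the Delta table back and display a few rows to confirm the load
spark.read.table("table_name").show(10)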
Is there any approach for incremental data? If the S3 txt file gets updated, I will have to run the dataframe again to pull the updated table, and that's a bit of a tedious job.
Realized the same approach a few minutes back! But your code solves the issue. Thanks again!
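On the incremental question: the thread does not cover this, so the following is only a rough sketch. Instead of overwriting the whole table each time the file changes, you can re-read the updated file and MERGE it into the existing Delta table on a key column, so only changed or new rows are written. The key column name "id", the file path, and the table name below are placeholders; adjust them to your data.

from delta.tables import DeltaTable

# Re-read the updated pipe-delimited file (same layout as before)
updates_df = spark.read.format("csv") \
    .option("header", "true") \
    .option("delimiter", "|") \
    .load("/path/to/your/file.txt")

# Merge into the existing Delta table on the key column:
# matching rows are updated, new rows are inserted
target = DeltaTable.forName(spark, "table_name")
target.alias("t") \
    .merge(updates_df.alias("s"), "t.id = s.id") \
    .whenMatchedUpdateAll() \
    .whenNotMatchedInsertAll() \
    .execute()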