Occasionally I end up with a system created backup table in my Fabric Lakehouse. For example, if my table is "customers", I also see a table named "customers_backup51c54568_b46b_40f9_8db2_800abb09622f". It appears to be a one-time backup. It does not get updated after it is initially created, even if the actual table does.
This happens more frequently with the larger tables in my lakehouse (10,000,000+ rows). It also seems to happen to tables imported to the lakehouse using a Copy pipeline activity from a SQL server.
When I see these backup tables, I manually delete them.
Does anyone else see this happening in their lakehouse? Is there a reason why these get created?
Hi @alozovoy,
I think these are staging tables used while data is copied from the source data store to the staging storage.
You can take a look at the following document to learn more about this feature and how it works:
Regards,
Xiaoxin Sheng
This has recently started happening in one of my lakehouses in the past week. I'm now up to six extra days' worth of tables. I checked the Enable staging setting in the Copy data activity and the box is not checked, so I don't think this should be occurring. Like the original poster, these are tables being loaded from a SQL server.
Hello, I started facing the same issue.
Did you find a solution?
At first I was just deleting all of the tables that contained the word "backup" in their name using a notebook. I would run this every few days rather than every day. In my situation this was occurring in a lakehouse and pipeline where we were doing testing, so it was not critical to keep it running. I ended up setting up a new pipeline and lakehouse and have not had any further issues.
If deleting the lakehouse is not an option, I would suggest trying to replace the pipeline with a new version. If that is not an option, running a notebook to delete the backup tables is going to be the easiest way to mass delete them.
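For anyone who wants to automate that cleanup, here is a minimal sketch of the kind of notebook cell I mean, assuming a Fabric PySpark notebook attached to the affected lakehouse. The "backup" filter matches the "<table>_backup<guid>" naming pattern from the original post; adjust it to suit your own table names.

```python
# Minimal sketch of a cleanup cell, assuming a Fabric PySpark notebook
# attached to the affected lakehouse. It drops every table whose name
# contains "backup", matching the "<table>_backup<guid>" pattern above.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# listTables() enumerates the tables in the notebook's default database.
backup_tables = [
    t.name for t in spark.catalog.listTables() if "backup" in t.name.lower()
]

for name in backup_tables:
    print(f"Dropping {name}")
    spark.sql(f"DROP TABLE IF EXISTS `{name}`")
```

Review the list it prints before relying on it unattended, in case a legitimate table happens to contain "backup" in its name.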
Hi, I used @string('OverwriteSchema') instead of Overwrite; the copy still worked, and the backup tables are not being created anymore 🙂
Thanks for the suggestion. That is one thing we tried, but in our case it didn't resolve the issue. I am glad it worked for you! Since I deleted that lakehouse and created a new pipeline, we have not had the issue return in any of our lakehouses. Hopefully it stays that way 🙂
Using OverwriteSchema instead of Overwrite solved this for me. I was using just 'Overwrite' in my metadata table before; then I checked the JSON code of the pipeline, and there the overwrite option was described as 'OverwriteSchema'.
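In case it helps anyone locate the setting, it sits on the Copy activity's sink in the pipeline JSON. The snippet below is only a sketch: the "LakehouseTableSink" type and the "tableActionOption" property name are assumptions about how the sink appears, so verify both against your own pipeline's JSON before changing anything.

```json
{
  "sink": {
    "type": "LakehouseTableSink",
    "tableActionOption": "OverwriteSchema"
  }
}
```

If the value comes from a metadata table, as in the @string('OverwriteSchema') reply above, the same property would carry the dynamic expression rather than the literal string.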