alozovoy
Advocate II

System Created Backup Tables

Occasionally I end up with a system created backup table in my Fabric Lakehouse. For example, if my table is "customers", I also see a table named "customers_backup51c54568_b46b_40f9_8db2_800abb09622f". It appears to be a one-time backup. It does not get updated after it is initially created, even if the actual table does.

 

This happens more frequently to the larger tables in my lakehouse (10,000,000+ rows). It also seems to happen to tables imported to the lakehouse from a SQL server using a Copy pipeline activity.

 

When I see these backup tables, I manually delete them.

 

Does anyone else see this happening in their lakehouse? Is there a reason why these get created?

1 ACCEPTED SOLUTION
Anonymous
Not applicable

Hi @alozovoy,

I think they are staging tables used when data is copied from the source data store to the staging storage.

You can take a look at the following document to learn more about this feature and how it works:

Copy activity performance optimization features - Azure Data Factory & Azure Synapse | Microsoft Lea...

Regards,

Xiaoxin Sheng
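
For context, staged copy is the copy activity feature that writes data to interim storage before loading it into the sink, and it is controlled from the activity's settings. Below is a minimal sketch of roughly where that appears in the copy activity definition, shown as a Python dict for readability; it mirrors the documented enableStaging / stagingSettings options, but the exact shape of your pipeline JSON may differ, so treat the names and values as illustrative.

# Illustrative fragment (as a Python dict) of a copy activity's typeProperties
# when staged copy is enabled. Property names follow the documented
# enableStaging / stagingSettings options; values are placeholders.
copy_activity_type_properties = {
    # ... source and sink settings elided ...
    "enableStaging": True,  # the "Enable staging" checkbox in the UI; False disables staged copy
    "stagingSettings": {
        "linkedServiceName": {
            "referenceName": "MyStagingStorage",  # hypothetical staging connection
            "type": "LinkedServiceReference",
        },
        "path": "staging-container/temp",  # hypothetical staging folder
    },
}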


7 REPLIES
jpelham
Advocate I

This has recently started happening in one of my lakehouses in the past week. I'm now up to 6 extra days' worth of tables. I checked the enable staging setting in the copy data activity and the box is not checked, so I don't think this should be occurring. Like the original poster, these are tables being loaded from a SQL server.

Hello, I started facing the same issue.

Did you find a solution?

At first I was just deleting all of the tables that contained the word "backup" in their name using a notebook. I would run this every few days rather than every day. In my situation this was occurring in a lakehouse and pipeline where we were doing testing, so it was not critical to keep it running. I ended up setting up a new pipeline and lakehouse and have not had any further issues.

 

If deleting the lakehouse is not an option, I would suggest trying to replace the pipeline with a new version. If that is not an option, running a notebook to delete the backup tables is going to be the easiest way to mass delete them. 
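
To make that concrete, here is a minimal sketch of the kind of notebook cell that can mass delete the backup tables. It assumes a Fabric PySpark notebook with the affected lakehouse attached as the default lakehouse, and that the stray tables contain "_backup" in their names as in the examples above; review the matched list before dropping anything.

# Minimal cleanup sketch for a Fabric PySpark notebook.
# Assumes the affected lakehouse is the notebook's default lakehouse.

# Find tables whose names contain the backup marker.
backup_tables = [
    t.name
    for t in spark.catalog.listTables()
    if "_backup" in t.name.lower()
]

print("Tables to drop:", backup_tables)  # review this list before deleting

for name in backup_tables:
    spark.sql(f"DROP TABLE IF EXISTS `{name}`")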

Hi, I used @string('OverwriteSchema') instead of Overwrite. This still worked, and the backup tables are no longer being created 🙂

Thanks for the suggestion. That is one thing we tried, but in our case it didn't resolve the issue. I am glad it worked for you! Since I deleted that lakehouse and created a new pipeline, we have not had the issue return in any of our lakehouses. Hopefully it stays that way 🙂

Using OverwriteSchema instead of Overwrite solved this for me. I was using just 'Overwrite' in my metadata table before; then I checked the JSON code of the pipeline, and there the overwrite option was described as 'OverwriteSchema'.
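
For anyone trying the same change: the table action is a sink-side setting of the copy activity, and in a metadata-driven pipeline it is often supplied through a parameter or expression such as @string('OverwriteSchema') rather than hard-coded. Below is a rough sketch of that sink fragment, shown as a Python dict for readability; the property and type names used here are my assumption of how the option serializes, so verify them against the JSON of your own pipeline as described above.

# Rough, assumed shape of the copy activity's lakehouse table sink settings.
# "tableActionOption" and "LakehouseTableSink" are assumed names; confirm them
# by opening the pipeline's JSON, where the 'OverwriteSchema' value appears.
copy_activity_sink = {
    "type": "LakehouseTableSink",            # assumed sink type name
    "tableActionOption": "OverwriteSchema"   # instead of "Overwrite"
    # or, when driven from a metadata table / parameter:
    # "tableActionOption": "@string('OverwriteSchema')"
}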
