We use Copy Jobs to copy data from our Fabric Lakehouse into an on-prem SQL database. These Copy Jobs are set to upsert the data. We noticed that after deleting a table in the on-prem SQL database and running the Copy Job again, the table gets recreated but no new data is added. Even after triggering the Copy Job multiple times, no new data arrives.
The strange part is that the Copy Jobs were initially created using a script, they ran once, and the tables were created in the on-prem SQL database. Can somebody help find a solution?
Hello @JVL2
You are hitting a situation where the Copy Job's upsert mechanism is tied to metadata or state information that isn't reset when the on-prem SQL table is dropped and recreated. Even though the pipeline recreates the table, it detects no "new" rows to insert. A few approaches can help:
1. Recreate or delete the Copy Job to reset its state
Delete and rebuild the Copy Job (or create a brand-new one) so that any watermarks or incremental checkpoints are cleared and it performs a fresh load.
2. Force a full load at least once
If the Copy Job was configured for incremental loading, temporarily switch it to a full load so it repopulates the on-prem table, then revert to incremental upsert after that run.
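The watermark behavior described above can be sketched in a few lines of Python. This is a hypothetical simulation, not the actual Copy Job internals: an incremental upsert driven by a persisted watermark never looks at the destination table itself, so dropping the table changes nothing until the watermark state is cleared (by rebuilding the Copy Job or forcing a full load).

```python
# Hypothetical simulation of watermark-driven incremental upsert.
# Names (source, state, destination, run_copy_job) are illustrative only.

source = [
    {"id": 1, "modified": 10},
    {"id": 2, "modified": 20},
    {"id": 3, "modified": 30},
]

state = {"watermark": 0}   # persisted by the copy job, NOT by the table
destination = {}           # stands in for the on-prem SQL table


def run_copy_job(full_load=False):
    """Copy rows newer than the watermark; return how many were copied."""
    if full_load:
        state["watermark"] = 0          # clearing state forces a full reload
    new_rows = [r for r in source if r["modified"] > state["watermark"]]
    for r in new_rows:
        destination[r["id"]] = r        # upsert: insert or overwrite by key
    if new_rows:
        state["watermark"] = max(r["modified"] for r in new_rows)
    return len(new_rows)


print(run_copy_job())                 # first run copies all 3 rows
destination.clear()                   # simulate DROP TABLE on the destination
print(run_copy_job())                 # 0: the watermark says nothing is new
print(run_copy_job(full_load=True))   # 3: resetting state repopulates the table
```

The key point of the sketch is the second run: the destination is empty, yet zero rows are copied, because the change detection consults only the stored watermark.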
Hope this helps.
Hi @JVL2 ,
Thanks for posting in Microsoft Fabric Community.
The issue might be caused by the copy job's upsert mechanism, which relies on metadata and schema mappings to synchronize data between the Fabric Lakehouse and the on-prem SQL database. When the SQL table was deleted, it is possible that the metadata was disrupted, causing the copy job to recreate the table but not insert any new data.
1. Verify Source Data: Ensure the source data contains changes or new records. If there are no changes, the upsert operation won’t insert data.
2. Check Staging Tables (if applicable): If staging tables are used, ensure they are properly configured and cleared before running the job.
3. Recreate the Copy Job: Recreate the Copy Job using the original script to reset its metadata and schema mapping.
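To confirm that the upsert itself is not the problem, note that an upsert into a freshly recreated, empty table inserts every row it actually receives. The following sketch uses sqlite3 as a stand-in for the on-prem SQL database (table and column names are illustrative): if the recreated table stays empty after a run, the rows were never sent, which points back at the Copy Job's metadata/watermark rather than the destination.

```python
# Illustrative upsert against an empty, recreated table (sqlite3 stand-in
# for the on-prem SQL database; table/column names are made up).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, val TEXT)")

rows = [(1, "a"), (2, "b"), (3, "c")]
conn.executemany(
    "INSERT INTO target (id, val) VALUES (?, ?) "
    "ON CONFLICT(id) DO UPDATE SET val = excluded.val",
    rows,
)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM target").fetchone()[0]
print(count)  # 3: an empty table poses no obstacle to an upsert
```

Since the upsert happily fills an empty table, the missing data must come from the change-detection side: the job decided there was nothing new to send.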
Hope this helps. Please reach out if you need further assistance.
If this post helps, please consider accepting it as the solution so others can find it easily; kudos would also be appreciated.
Best regards,
Vinay
The sole solution was indeed to remove the Copy Jobs and recreate them. Thanks!
Agreed. They should add a force-full-reload flag for incremental loads. This is annoying when you have a lot of tables, and the UI is clunky and a bit slow.
Apparently the product team is currently working on this feature.