Hello,
I have a pipeline that reads data from my SQL Server and then writes it to a lakehouse table.
I set a scheduled refresh for this table, but even though the refresh succeeds, the data is not being updated. I tried a manual refresh, but it still did not work. I have now realized that some of my tables (including the one I am testing) have a red cross in the lakehouse with the error message: An internal error has occurred while applying table changes to SQL.
I could not find any information related to this error message, so now I am wondering if the data is not updating because of this.
What does this error mean and what can I do to solve it?
Hi Jeremy,
The table worked in the past; I checked it during the first few days after I created it and it was working fine.
I checked the source and destination types and everything is OK, there are no spaces in the column names of the lakehouse, and I changed the DateTime datatype to Date to make it simpler. It still did not work. I tried both a manual refresh and the automatic refresh, but the table is still not updating.
Could you please provide more details about the solution from the community regarding the "Overwrite" functionality?
@ilseeb - I find it odd that the table was working fine and then after a few days of runs it started to not load.
Can you clone the Pipeline and write the data to a new table, for testing? Furthermore, can you modify the source to only include records from before the first day you ran the table and then see if the new table works. Then run it again for the full dataset or whatever you are running now and see if the table breaks.
For the overwrite we noticed that the default "Overwrite" option was not actually overwriting things. There were some records deleted in the source that were still present in the Lakehouse after multiple job runs. I found a post in the Community where the Overwrite option was not doing everything that it did before.
The workaround (temporary one) was to use a Dynamic value of Overwrite for the Destination Action:
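Roughly, the parameterized destination action ends up looking something like this in the pipeline's JSON view. Treat it as a sketch rather than exact syntax: the property names here (LakehouseTableSink, tableActionOption) are my best guess at how the Lakehouse table destination is serialized, so double-check them against your own pipeline definition.

```jsonc
{
  "name": "Copy SQL Server to Lakehouse",
  "type": "Copy",
  "typeProperties": {
    "source": { "type": "SqlServerSource" },
    "sink": {
      "type": "LakehouseTableSink",
      // "Table action" set through dynamic content instead of the fixed UI dropdown
      "tableActionOption": {
        "value": "Overwrite",
        "type": "Expression"
      }
    }
  }
}
```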
That seemed to work for us with the Schema and Data changes.
Jeremy
I tried the Overwrite dynamic value and it looks like it's working now. I'll keep monitoring it today and tomorrow to see if the table is correctly updated after each run.
Why do you mention that it was a temporary fix?
@ilseeb -
I am glad to hear that the Overwrite resolved the issue!
I would expect the default overwrite functionality to be changed/fixed in a future update. It previously worked as a full overwrite.
hi again,
I've noticed that since using the Overwrite dynamic value, an additional backup table is saved every time the pipeline runs. Do you also have this issue? Is there a way to get rid of them?
@ilseeb - Sorry for the delay, I have been traveling so I haven't kept up. I am not seeing that behavior on our side when using the overwrite. Are you able to provide a screenshot of what you are seeing?
No worries. I found another post where they used @string('OverwriteSchema') instead of Overwrite; this still worked, and the backup tables are no longer being created.
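For anyone comparing the two, this is roughly how the two variants look on the destination's Table action property when set through dynamic content (same caveat as above: the property name is a guess at how the Lakehouse destination is serialized, so verify it in your own pipeline JSON):

```jsonc
// First workaround from this thread: overwrites the data, but in my case an
// extra backup table was saved on every run
"tableActionOption": { "value": "Overwrite", "type": "Expression" }

// What I am using now: the table is still overwritten for me and the backup
// tables are no longer created (results may differ, see the replies below)
"tableActionOption": { "value": "@string('OverwriteSchema')", "type": "Expression" }
```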
@ilseeb - I was mistaken. I went back to one of the pipelines that we now have set up on a triggered schedule and, sure enough, I am seeing the backup files now. I switched it to your solution of OverwriteSchema. I will see if they come back.
I'm also curious whether, even though we shouldn't have to, converting the value with string() makes a difference versus just typing it in like I did before.
Nice catch, and thanks for updating this with the solution. I'll let you know if the backup tables come back.
@ilseeb - I put in the OverwriteSchema option and it looks like the data is being appended now in the Lakehouse and it is not a full overwrite. Are you seeing the same thing?
Guess another Microsoft ticket is in order.
No, in my case the tables are being overwritten. I had a quick look again to make sure, and I don't have any duplicates; I've been using OverwriteSchema for more than a week now.
Did you try it on just one table or more than one? Just to check whether the issue is with only that one table or with all of them.
@ilseeb - It was a couple of tables. I switched back to the OOB overwrite setting last week, and in two instances so far it looks like the default overwrite option is working properly again. My tables are being recreated and new data is showing up. However, schema changes are very infrequent, so I don't know yet whether it also overwrites the schema properly in those cases.
Did this table ever work/load properly?
I've run into similar issues moving data into Lakehouses using the different methods and sources available to us. A few things I ran into that might be worth checking:
- Can you validate that all of the data types in the Source/Destination (if you can browse columns) are what they should be?
- Check for any spaces in the column names coming from the source; this should be less of an issue with Pipelines but spaces in column names on the Lakehouse side can be a problem
- Check DateTime columns or columns that should get converted to date time; the format could be off
Another thing on the Pipeline side: if you are doing an Overwrite, there had been an issue where the traditional "overwrite" in the UI was not really overwriting the table. We found a solution on the community where you can parameterize the Append/Overwrite option and set it to "Overwrite". Once we did that, the tables rebuilt properly.
Hopefully, this provides a starting point but let me know the outcome of the above and we can brainstorm some more.
Jeremy
Hi, do you happen to have a link to that solution in the community?