Hi All,
I'd be grateful for any light anyone can shed on the following persistent issue 🙏
Description of Issue:
I'm experiencing consistent failures when using Copy Data activity in Fabric pipelines to write data from an Azure SQL Managed Instance to a Lakehouse Delta table.
The pipeline runs for an extended time and then fails. After failure:
The error returned contains:
Failure happened on 'destination' side. ErrorCode=LakehouseOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Lakehouse operation failed for: Operation returned an invalid status code 'InternalServerError'.
Key Observations:
Steps Already Taken:
imo, your setup is fine; the failure is coming from Fabric itself. You are hitting an internal bug in the Copy Data -> Lakehouse Delta writer.
The Copy activity currently struggles with small datasets, Delta transaction commits, and certain OAuth token flows when writing through the OneLake endpoint. It creates the transaction files, but the writer crashes before committing the Delta log. Hence the stray .tmp files, in my view.
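That "created but never committed" symptom can be spotted mechanically. The sketch below is a stdlib-only illustration (the folder layout and paths are assumptions, not the exact OneLake structure): it flags stray .tmp files sitting next to a transaction log with no commit entries, then reproduces the symptom in a scratch directory.

```python
import tempfile
from pathlib import Path

def uncommitted_tmp_files(table_dir: Path) -> list[Path]:
    """Return stray .tmp files when the _delta_log has no commit entries."""
    log_dir = table_dir / "_delta_log"
    # Delta commits are numbered JSON files inside _delta_log.
    commits = list(log_dir.glob("*.json")) if log_dir.exists() else []
    tmp_files = list(table_dir.rglob("*.tmp"))
    # Stray .tmp files alongside an empty log suggest the writer crashed
    # before committing -- the failure mode described above.
    return tmp_files if not commits else []

# Toy reproduction of the symptom in a scratch directory:
with tempfile.TemporaryDirectory() as scratch:
    table = Path(scratch) / "MyTable"
    (table / "_delta_log").mkdir(parents=True)          # empty log, no commits
    (table / "part-0000.parquet.tmp").write_bytes(b"")  # stray writer output
    print(len(uncommitted_tmp_files(table)))            # 1 -> uncommitted write
```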
What you can try now:
Stop using Copy Data to the Lakehouse for now. It is unreliable as a Delta sink.
Use Dataflow Gen2 (already works) or Spark notebook ingestion (stable).
If you need pipeline automation, call the dataflow or notebook from the pipeline.
Workarounds that may or may not help but are worth a try:
Ensure the Lakehouse is in the same region as the Fabric capacity.
Recreate the Lakehouse (metadata corruption sometimes persists).
Use managed identity (MI) auth to the Lakehouse if possible.
I firmly believe this is a Fabric-side bug; you cannot fix it through settings. The product team needs to resolve the Delta sink commit failure in the Copy activity runtime. Log a Fabric support ticket. This is the only way to get the backend team to patch it. Include the .tmp files and failed run IDs.
After several calls with Microsoft Support we identified the solution to this problem in this particular instance.
The issue was with VNet gateway configuration: the subnet needed Microsoft.Storage enabled.
Virtual network data gateway best practices | Microsoft Learn
This gateway has been in use for some years across our estate, and has been used successfully for ingesting into Fabric from other sources without this config change, so it was not something I had come across when troubleshooting.
Following the change even small data tables are written to Fabric Lakehouse.
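For anyone hitting the same wall, the change itself is small. A sketch with the Azure CLI, using placeholder resource names (not our actual environment); note that `--service-endpoints` replaces the existing list, so include any endpoints already configured on the subnet:

```shell
# Enable the Microsoft.Storage service endpoint on the subnet
# that hosts the VNet data gateway (names below are placeholders).
az network vnet subnet update \
  --resource-group my-gateway-rg \
  --vnet-name my-gateway-vnet \
  --name my-gateway-subnet \
  --service-endpoints Microsoft.Storage
```

The same setting is available in the portal under the subnet's "Service endpoints" blade.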
@Vinodh247 - I thought you'd be interested 🙂
Thanks for sharing this, @BettinaEK. This will be useful for many users who face the same issue.
Hi Vinodh,
Thank you for confirming what I'd already suspected, and for what else to include in the ticket to MS. It's not listed as a known issue (that I can see) in the Fabric Known Issues log.
I've raised the ticket & will cross my fingers!
Kind Regards
B