
BettinaEK
Advocate I

CopyData Failure Writing to Fabric Lakehouse (Delta Table) from Azure SQL Managed Instance

Hi All, 

 

I'd be grateful for any light anyone can shed on the following persistent issue 🙏

 

Description of Issue:
I'm experiencing consistent failures when using the Copy Data activity in Fabric pipelines to write data from an Azure SQL Managed Instance to a Lakehouse Delta table.

  • Authentication to the source (SQL MI) is with a service principal, and to the Lakehouse with OAuth (AAD).
  • I can use the SP (app registration) to query the data in SSMS, and the data previews successfully in the Copy Data activity.

The pipeline runs for an extended time and then fails. After failure:

  • The Delta table appears in the Lakehouse GUI.
  • The _delta_log folder contains two .tmp files.
  • No Parquet data files are created.
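For anyone hitting the same symptom, the failure signature above (only .tmp files in _delta_log, no committed version) can be checked programmatically. A minimal sketch, assuming local or mounted filesystem access to the table folder; a committed Delta version appears as a zero-padded JSON file such as 00000000000000000000.json:

```python
import os
import re
import tempfile

def delta_commit_status(table_path):
    """Inspect a Delta table's _delta_log folder and report how many
    versions actually committed. Stray .tmp files with no .json commit
    files mean the writer died before committing the transaction."""
    log_dir = os.path.join(table_path, "_delta_log")
    if not os.path.isdir(log_dir):
        return {"committed_versions": 0, "tmp_files": 0}
    names = os.listdir(log_dir)
    commits = [n for n in names if re.fullmatch(r"\d{20}\.json", n)]
    tmps = [n for n in names if n.endswith(".tmp")]
    return {"committed_versions": len(commits), "tmp_files": len(tmps)}

# Reproduce the failure signature described above: a _delta_log
# containing only .tmp files and no committed version.
table = tempfile.mkdtemp()
os.makedirs(os.path.join(table, "_delta_log"))
for name in ("part-1.tmp", "part-2.tmp"):
    open(os.path.join(table, "_delta_log", name), "w").close()

status = delta_commit_status(table)
print(status)  # {'committed_versions': 0, 'tmp_files': 2}
```

A healthy write would show at least one committed version alongside the Parquet data files.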

 

The Error returned contains:

Failure happened on 'destination' side. ErrorCode=LakehouseOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Lakehouse operation failed for: Operation returned an invalid status code 'InternalServerError'.

 

Key Observations:

  • The same dataset (4 columns × 204 rows) writes successfully using Dataflow Gen2, confirming Lakehouse and permissions are healthy.
  • Issue persists regardless of:
    • Sink settings: overwrite vs append.
    • Schema handling: schema explicitly defined.
    • Data types: no unsupported types present.
    • Concurrency: no other jobs writing to the table.
  • Partitions are disabled; compression cannot be toggled.
  • Setting Degree of Copy Parallelism = 1 does not resolve the issue.

Steps Already Taken:

  1. Verified Lakehouse permissions and workspace health.
  2. Disabled advanced options (partitioning).
  3. Tested overwrite and append modes.
  4. Compared successful Dataflow Gen2 ingestion with failing CopyData pipeline.
  5. Compared Copy Job to the Copy Data activity: fails in the same way, with a slightly different error.
  6. Compared connectors of type SQL Server and Azure SQL Managed Instance: both fail in the same way, with the same error.
  7. Compared write behaviour to Fabric Data Warehouse: fails in the same way, with a slightly different error.
2 ACCEPTED SOLUTIONS
Vinodh247
Solution Sage

IMO your setup is fine; the failure is coming from Fabric itself. You are hitting an internal bug in the Copy Data -> Lakehouse Delta writer.

 

The Copy activity currently struggles with small datasets, Delta transaction commits, and certain OAuth token flows when writing through the OneLake endpoint. It creates the transaction files, but the writer crashes before committing the Delta log; hence the stray .tmp files, in my view.

 

What you can try now:

  1. Stop using Copy Data to Lakehouse for now. It is unreliable as a Delta sink.

  2. Use Dataflow Gen2 (already works) or Spark notebook ingestion (stable).

  3. If you need pipeline automation, call the dataflow or notebook from the pipeline.

Workarounds that may or may not help but are worth trying:

  • Ensure the Lakehouse is in the same region as your Fabric capacity.

  • Recreate the Lakehouse (metadata corruption sometimes persists).

  • Use managed identity auth to the Lakehouse if possible.


I firmly believe this is a Fabric-side bug; you cannot fix it through settings. The product team needs to resolve the Delta sink commit failure in the Copy activity runtime. Log a Fabric support ticket; this is the only way to get the backend team to patch it. Include the .tmp files and failed run IDs.

 

Please 'Kudos' and 'Accept as Solution' if this answered your query.

Regards,
Vinodh
Microsoft MVP [Fabric]
LI: https://www.linkedin.com/in/vinodh-kumar-173582132
Blog: vinsdata.in


BettinaEK
Advocate I

After several calls with Microsoft Support we identified the solution to this problem in this particular instance.

 

The issue was with VNet gateway configuration: the subnet needed Microsoft.Storage enabled.

Virtual network data gateway best practices | Microsoft Learn
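For reference, this kind of subnet change can be applied with the Azure CLI (az network vnet subnet update --service-endpoints Microsoft.Storage). A minimal sketch that assembles that command; the resource names below are hypothetical placeholders, not values from this thread:

```python
def subnet_service_endpoint_cmd(resource_group, vnet, subnet,
                                endpoints=("Microsoft.Storage",)):
    """Build the Azure CLI command that enables service endpoints on
    the subnet hosting the VNet data gateway. Run the returned list
    with subprocess.run(), or paste the joined string into a shell."""
    return [
        "az", "network", "vnet", "subnet", "update",
        "--resource-group", resource_group,
        "--vnet-name", vnet,
        "--name", subnet,
        "--service-endpoints", *endpoints,
    ]

# Hypothetical resource names, for illustration only.
cmd = subnet_service_endpoint_cmd("rg-gateways", "vnet-fabric", "snet-gateway")
print(" ".join(cmd))
```

After the update, the subnet's serviceEndpoints list should include Microsoft.Storage (visible via az network vnet subnet show).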

 

This gateway has been in use for some years across our estate and has successfully ingested into Fabric from other sources without this configuration change, so it was not something I had come across when troubleshooting.

 

Following the change, even small data tables are written to the Fabric Lakehouse.

 

@Vinodh247 - I thought you'd be interested 🙂


4 REPLIES

Thanks for sharing this @BettinaEK, this will be useful for many users who face the same issue.

Please 'Kudos' and 'Accept as Solution' if this answered your query.

Regards,
Vinodh
Microsoft MVP [Fabric]
LI: https://www.linkedin.com/in/vinodh-kumar-173582132
Blog: vinsdata.in

Hi Vinodh,

Thank you for confirming what I'd already suspected, and for advising what else to include in the ticket to MS. It's not listed as a known issue (that I can see) in the Fabric Known Issues log.

I've raised the ticket & will cross my fingers!

 

Kind Regards

B
