jFloury
Frequent Visitor

Error on Dataflow Gen2 writing to destination

Hello,

 

I have a Dataflow Gen2 that writes its results to a Lakehouse in Fabric, in the same workspace.

It works perfectly on small volumes.

 

For an unknown reason, when I run it on larger volumes of data (I don't know if that is the cause), I encounter the following error:

 

A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.) Details: Reason = DataSource.Error;ErrorCode = Lakehouse036;Message = Microsoft SQL : A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.);Detail = [DataSourceKind = "Lakehouse", DataSourcePath = "Lakehouse", DataSourceKind.2 = "SQL", DataSourcePath.2 =

 

It seems to happen at the stage of writing to the destination, but I'm not sure.

 

Is anyone experiencing such an issue? Any clues?

 

It is quite difficult to debug the issue!

 

Thanks for any help

 

1 ACCEPTED SOLUTION
andrewsommer
Super User

This error is commonly associated with connection stability issues, particularly when handling larger datasets. Possible causes:

Query Timeout: When dealing with large datasets, your query might take too long, causing the connection to time out, or the server might close the connection due to resource constraints.

Lakehouse Resource Limitations: Fabric Lakehouse has compute and storage constraints; large writes might exceed these limits.

Concurrency Issues: If multiple processes try to write to the same Lakehouse at the same time, the server may close connections to prevent overload.

Fabric or SQL Backend Issue: Since the Lakehouse in Fabric uses SQL-based access, internal SQL service limits (such as memory pressure) might cause forced disconnections.
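If volume is indeed the trigger, one quick way to test that theory is to temporarily shrink how much the dataflow writes per refresh. Below is a minimal Power Query M sketch (not from the original dataflow; the query name #"BigSource" and the ModifiedDate column are placeholders):

// Placeholder query: writes only the last 30 days to the Lakehouse destination,
// to check whether smaller writes succeed where the full load fails.
let
    Source = #"BigSource",
    // Cut-off date 30 days before today (local time)
    Cutoff = Date.AddDays(Date.From(DateTime.LocalNow()), -30),
    // Keep only recent rows so the destination write stays small
    Filtered = Table.SelectRows(Source, each Date.From([ModifiedDate]) >= Cutoff)
in
    Filtered

If the reduced load completes, that points to the timeout/volume causes above rather than a general connectivity problem.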

 

Please mark this post as the solution if it helps you. Kudos are appreciated.

 


5 REPLIES
v-csrikanth
Community Support

Hi @jFloury 

I wanted to follow up since I haven't heard from you in a while. Have you had a chance to try the suggested solutions?
If your issue is resolved, please consider marking the post as solved. However, if you're still facing challenges, feel free to share the details, and we'll be happy to assist you further.
Looking forward to your response!


Best Regards,
Community Support Team _ C Srikanth.

burakkaragoz
Community Champion

Hi @jFloury ,

 

This error typically indicates a network or resource timeout issue, especially when dealing with larger data volumes. Here are a few suggestions to help troubleshoot and potentially resolve it:

Suggestions:

  1. Split the Load
    Try breaking the data into smaller partitions or batches (see the sketch after this list). This can help avoid timeouts or memory pressure during the write operation.

  2. Increase Timeout Settings (if configurable)
    If you're using a custom connector or pipeline, check if there are timeout or retry settings you can increase.

  3. Check Workspace Capacity
    If you're on a trial or low-capacity SKU, the environment might not be able to handle large data writes. Consider testing in a higher-capacity workspace if available.

  4. Monitor Resource Usage
    Use the Monitoring Hub in Fabric to check for memory or CPU spikes during the Dataflow execution. This might help confirm if the issue is resource-related.

  5. Retry with Logging Enabled
    Enable detailed logging in your Dataflow Gen2 settings (if available) to capture more context around the failure point.

  6. Contact Microsoft Support
    Since the error includes a specific code (Lakehouse036), Microsoft support might be able to provide more insight into what that code means in your context.

Let me know if you'd like help designing a batching strategy or checking capacity settings.
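
To make point 1 concrete, here is a minimal Power Query M sketch of a partitioned load (the query name #"BigSource", the OrderDate column, and the YearToLoad value are placeholders, not something taken from your dataflow):

// Placeholder partition query: loads one year per run so each destination
// write stays small. Duplicate the query (or turn YearToLoad into a dataflow
// parameter) for each partition, and set the Lakehouse destination to "Append"
// rather than "Replace" so the slices accumulate in the same table.
let
    YearToLoad = 2023,
    Source = #"BigSource",
    Partition = Table.SelectRows(Source, each Date.Year([OrderDate]) = YearToLoad)
in
    Partition

Whether the append update method fits your destination table depends on your setup, so treat this as a starting point rather than a drop-in fix.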

v-csrikanth
Community Support

Hi @jFloury 

It's been a while since I heard back from you and I wanted to follow up. Have you had a chance to try the solutions that have been offered?
If the issue has been resolved, can you mark the post as resolved? If you're still experiencing challenges, please feel free to let us know and we'll be happy to continue to help!
Looking forward to your reply!

Best Regards,
Community Support Team _ C Srikanth.

v-csrikanth
Community Support

Hi @jFloury 
Sorry for the late response.

Thank you for being part of the Microsoft Fabric Community.

As highlighted by @andrewsommer, the proposed approach appears to address your requirements effectively. Could you please confirm whether your issue has been resolved?
If you are still facing any challenges, kindly provide further details, and we will be happy to assist you.

Best Regards,
Cheri Srikanth
