Msg 10054, Level 20, State 0, Line 0
A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)
Key details:
Questions:
How can I fix this? The official documentation isn't helpful, as this is happening for one very specific stored procedure rather than all of them, so I haven't been able to resolve it.
Thank you so much for sharing the three workarounds; I really appreciate your effort and quick suggestions.
A few key observations from my side:
Thanks again for your assistance!
Hi @Yusuf7,
I would also like to take a moment to thank @burakkaragoz for actively participating in the community forum and for the solutions you've been sharing. Your contributions make a real difference.
I wanted to check if you had the opportunity to review the information provided. Please feel free to contact us if you have any further questions.
Regards,
Community Support Team.
Hi Community Support Team
I have tested the first workaround and it seems to work, but it adds an additional step to the design process.
The transport-level error we encountered looks like a major potential problem unless we can identify a documented root cause.
Requesting further assistance on this, please.
Hi @Yusuf7,
This is a frustrating error because "Transport-level error" usually sounds like a network glitch. However, in the context of Fabric Warehouse executing heavy Stored Procedures, it often points to a Resource Governance kill.
Since you mentioned this specifically happens with SPs involving OPENJSON and heavy transformations, you are likely hitting a memory or transaction log limit on the compute node. This causes the backend to forcibly close the connection to protect the health of the warehouse.
Here is why this happens and how to fix it:
1. The OPENJSON Overhead
Parsing large JSON strings directly in the Warehouse engine can be very memory-intensive. If your SP tries to parse millions of rows in a single INSERT...SELECT statement, it might spike the resource usage beyond the capacity of your current SKU. This leads to a disconnect; a sketch of the pattern follows.
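For illustration only, here is a minimal sketch of that single-statement pattern. The table, column, JSON path, and time zone names (dbo.RawJson, JsonPayload, dbo.Events, 'Eastern Standard Time', and so on) are hypothetical stand-ins, not your actual schema:

```sql
-- Hypothetical single-statement load: parse, convert, and insert
-- millions of rows in one transaction. All of the memory and log
-- pressure lands on this one statement.
INSERT INTO dbo.Events (EventId, EventTimestampLocal)
SELECT
    j.EventId,
    CAST(j.EventTimestampUtc AT TIME ZONE 'UTC'
         AT TIME ZONE 'Eastern Standard Time' AS DATETIME2(3)) AS EventTimestampLocal
FROM dbo.RawJson AS r
CROSS APPLY OPENJSON(r.JsonPayload)
    WITH (
        EventId           INT          '$.eventId',
        EventTimestampUtc DATETIME2(3) '$.timestamp'
    ) AS j;
```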
2. Mitigation Strategies
Strategy A: Materialize Intermediate Results (CTAS)
Instead of doing OPENJSON + AT TIME ZONE + INSERT all in one go, break it down. Use CREATE TABLE AS SELECT (CTAS) to first parse the JSON into a temporary (or staging) table, then run a second step to apply the time zone transformation. This helps the engine manage memory better than one giant query; see the sketch below.
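A minimal sketch of the two-step split, reusing the same hypothetical names as above:

```sql
-- Step 1: materialize the parsed JSON into a staging table via CTAS,
-- so the expensive parse runs as its own bounded operation.
CREATE TABLE dbo.Staging_Events
AS
SELECT
    j.EventId,
    j.EventTimestampUtc
FROM dbo.RawJson AS r
CROSS APPLY OPENJSON(r.JsonPayload)
    WITH (
        EventId           INT          '$.eventId',
        EventTimestampUtc DATETIME2(3) '$.timestamp'
    ) AS j;

-- Step 2: apply the time zone conversion in a separate, cheaper pass.
CREATE TABLE dbo.Events
AS
SELECT
    EventId,
    CAST(EventTimestampUtc AT TIME ZONE 'UTC'
         AT TIME ZONE 'Eastern Standard Time' AS DATETIME2(3)) AS EventTimestampLocal
FROM dbo.Staging_Events;
```

Because each statement commits independently, the engine never has to hold the memory and log footprint of the JSON parse and the transformation at the same time.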
Strategy B: Batching
If the source table is huge, do not process it all at once. Loop through your data in chunks (e.g., process 100,000 rows at a time based on an ID or date range). This keeps the transaction size small and prevents timeouts; a sketch follows.
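A minimal sketch of the batching loop, assuming the source table has an increasing integer key Id (hypothetical, like the other names here):

```sql
-- Process the source table in slices of ~100,000 rows keyed on Id.
DECLARE @BatchSize BIGINT = 100000;
DECLARE @CurrentId BIGINT;
DECLARE @MaxId BIGINT;

SELECT @CurrentId = MIN(Id), @MaxId = MAX(Id) FROM dbo.RawJson;

WHILE @CurrentId <= @MaxId
BEGIN
    -- Each iteration parses and loads one bounded slice, keeping the
    -- transaction (and its memory/log footprint) small.
    INSERT INTO dbo.Staging_Events (EventId, EventTimestampUtc)
    SELECT
        j.EventId,
        j.EventTimestampUtc
    FROM dbo.RawJson AS r
    CROSS APPLY OPENJSON(r.JsonPayload)
        WITH (
            EventId           INT          '$.eventId',
            EventTimestampUtc DATETIME2(3) '$.timestamp'
        ) AS j
    WHERE r.Id >= @CurrentId
      AND r.Id <  @CurrentId + @BatchSize;

    SET @CurrentId = @CurrentId + @BatchSize;
END;
```

If there is no convenient integer key, a date range works the same way; the point is that each iteration is its own small transaction.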
Strategy C: Check Statistics
Ensure statistics are up to date on the source table. Sometimes a bad execution plan causes the engine to allocate far more memory than needed for the JSON parsing. Run:
UPDATE STATISTICS [YourSchema].[YourTable]
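Against the same hypothetical table from the earlier sketches, that would be:

```sql
-- Refresh statistics so the optimizer sizes the memory grant for the
-- JSON parse more accurately.
UPDATE STATISTICS dbo.RawJson;
```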
Why Dataflow Gen2 works:
You noticed Dataflow Gen2 works but is slow. That is because Dataflow handles row-by-row streaming and pagination better, whereas the Warehouse attempts one massive set-based operation that hits the limit.
Try the CTAS approach first. It is usually the performance winner in Fabric DW!
If my response resolved your query, kindly mark it as the Accepted Solution to assist others. Additionally, I would be grateful for a 'Kudos' if you found my response helpful.
This response was assisted by AI for translation and formatting purposes.