Yusuf7
Frequent Visitor

Transport Level Error on Stored Procedure Execution in Fabric Warehouse

Msg 10054, Level 20, State 0, Line 0
A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)

 

Key details:

  • The error occurs for certain stored procedures but not all.
  • These procedures involve JSON parsing and transformations (e.g., using OPENJSON and AT TIME ZONE); the source is the same Fabric Warehouse.
  • The same operations run smoothly with Dataflow Gen2, but take considerably longer to complete. 
  • If this starts happening to other procedures, it could halt our production pipeline.

 

Questions:

 

How can I fix this? The official documentation isn't helpful, since the error occurs for one very specific stored procedure rather than all of them, so I couldn't resolve it. 

Yusuf7
Frequent Visitor

Hi  @burakkaragoz

 

Thank you so much for sharing the three workarounds — I really appreciate your effort and quick suggestions.

 

A few key observations from my side:

 

  1. The same query fails when executed as a stored procedure, but runs perfectly fine as a standalone query within 5–6 seconds (processing ~300–400k records from our telemetry staging layer in Snowflake).
  2. Even with only 4–5k new records, the stored procedure still fails, so data volume doesn’t seem to be the main issue.
  3. The dataflow I’m using as an alternative runs the same SQL (not Power Query) for transformations and completes within 2–4 minutes.
  4. Designing the stored procedure to populate an intermediary table appears to work, but I’d prefer to find a permanent solution before relying on this workaround. Similar queries in other databases (like Snowflake) work perfectly fine.

 

Thanks again for your assistance!

 

v-hjannapu
Community Support

Hi @Yusuf7,

I would also like to take a moment to thank @burakkaragoz for actively participating in the community forum and for the solutions you've been sharing. Your contributions make a real difference.
 

I wanted to check if you had the opportunity to review the information provided. Please feel free to contact us if you have any further questions.

Regards,
Community Support Team.

Hi Community Support Team

 

I have tested the first workaround and it seems to work, but it adds an additional step to the design process. 

 

The transport-level error we encountered seems like a major potential problem unless we can identify a documented root cause. 

 

Requesting further assistance on this please.

 

burakkaragoz
Community Champion

Hi @Yusuf7 ,

 

This is a frustrating error because "Transport-level error" usually sounds like a network glitch. However, in the context of Fabric Warehouse executing heavy Stored Procedures, it often points to a Resource Governance kill.

Since you mentioned this specifically happens with SPs involving OPENJSON and heavy transformations, you are likely hitting a memory or transaction log limit on the compute node. This causes the backend to forcibly close the connection to protect the warehouse health.

 

Here is why this happens and how to fix it:

1. The "OPENJSON" Overhead Parsing large JSON strings directly in the Warehouse engine can be very memory-intensive. If your SP tries to parse millions of rows in a single INSERT...SELECT statement, it might spike the resource usage beyond the capacity of your current SKU. This leads to a disconnect.
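To make the failure mode concrete, the problematic shape is typically a single all-in-one statement like the sketch below. All table, column, and JSON path names here are made up for illustration — they are not from the original post:

```sql
-- Hypothetical all-in-one pattern: parse + convert + insert in one statement.
-- With large row counts, the entire operation must fit into a single
-- memory grant and transaction, which can trigger a backend disconnect.
INSERT INTO dbo.telemetry_final (device_id, event_time_local)
SELECT j.device_id,
       j.event_time_utc AT TIME ZONE 'UTC' AT TIME ZONE 'W. Europe Standard Time'
FROM dbo.telemetry_raw AS r
CROSS APPLY OPENJSON(r.payload)
     WITH (device_id      INT       '$.deviceId',
           event_time_utc DATETIME2 '$.eventTimeUtc') AS j;
```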

2. Mitigation Strategies

  • Strategy A: Materialize Intermediate Results (CTAS) Instead of doing OPENJSON + AT TIME ZONE + INSERT all in one go, try to break it down. Use CREATE TABLE AS SELECT (CTAS) to first parse the JSON into a temporary (or staging) table. Then run a second step to apply the Time Zone transformation. This helps the engine manage memory better than one giant query.

  • Strategy B: Batching If the source table is huge, do not process it all at once. Loop through your data in chunks (e.g., process 100,000 rows at a time based on an ID or Date range). This keeps the transaction size small and prevents timeouts.

  • Strategy C: Check Statistics Ensure statistics are up to date on the source table. Sometimes a bad execution plan causes the engine to allocate way more memory than needed for the JSON parsing. Run: UPDATE STATISTICS [YourSchema].[YourTable]
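As a rough sketch of Strategies A and B combined (all object names are hypothetical, and the batching key shown is arbitrary — pick whatever ID or date column fits your data):

```sql
-- Strategy A: materialize the parsed JSON first with CTAS, then transform.
CREATE TABLE dbo.telemetry_stage AS
SELECT j.device_id,
       j.event_time_utc
FROM dbo.telemetry_raw AS r
CROSS APPLY OPENJSON(r.payload)
     WITH (device_id      INT       '$.deviceId',
           event_time_utc DATETIME2 '$.eventTimeUtc') AS j;

-- Strategy B: apply the time-zone transform in bounded batches,
-- so each INSERT stays within a small transaction.
DECLARE @min_id BIGINT = (SELECT MIN(device_id) FROM dbo.telemetry_stage),
        @max_id BIGINT = (SELECT MAX(device_id) FROM dbo.telemetry_stage),
        @batch  BIGINT = 100000;

WHILE @min_id <= @max_id
BEGIN
    INSERT INTO dbo.telemetry_final (device_id, event_time_local)
    SELECT device_id,
           event_time_utc AT TIME ZONE 'UTC' AT TIME ZONE 'W. Europe Standard Time'
    FROM dbo.telemetry_stage
    WHERE device_id >= @min_id
      AND device_id <  @min_id + @batch;

    SET @min_id += @batch;
END;
```

Splitting the work this way gives the engine two smaller memory grants instead of one large one, which is usually what keeps the connection alive.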

Why Dataflow Gen2 works: You noticed Dataflow Gen2 succeeds but is slow. That is because Dataflow handles row-by-row streaming and pagination better, whereas the Warehouse attempts one massive set-based operation that hits the resource limit.

 

Try the CTAS approach first. It is usually the performance winner in Fabric DW!


If my response resolved your query, kindly mark it as the Accepted Solution to assist others. Additionally, I would be grateful for a 'Kudos' if you found my response helpful.
This response was assisted by AI for translation and formatting purposes.
