
Oviwan
Advocate I

Intermittent TCP Provider: Error code 0x68 (08S01) during bulk INSERT full loads into Fabric SQL DB

Hi Fabric community,

We are experiencing intermittent connection failures when performing full data loads using bulk INSERT statements into a SQL Database in Microsoft Fabric.
The error typically occurs during COMMIT, after a large bulk insert has been running successfully for some time.

Error message

DBMS error [TCP Provider: Error code 0x68. Communication link failure].
DBMS error number [104].
SQL State [08S01].
DBMS version [12.00.9114].
Driver [Microsoft][ODBC Driver 18 for SQL Server].
Last query [commit].

Scenario

  • Target: SQL Database in Microsoft Fabric
  • Load type: Full load
  • Method: Bulk INSERT / INSERT BULK statements
  • Volume: Millions of rows per table
  • Driver: ODBC Driver 18 for SQL Server

The bulk insert runs for a while and then fails sporadically, usually:

  • towards the end of a large transaction
  • at COMMIT time
  • without a deterministic pattern or reproducible table-specific issue

A simple retry often succeeds, but not always, which suggests a transient connectivity issue rather than a data or schema problem.
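Since retries often succeed, a structured retry wrapper around the failing step can make "retry" a deliberate strategy rather than a manual one. The sketch below is a minimal stdlib-only example, assuming the load is driven from Python and that the database exception exposes a `sqlstate` attribute (the helper names are illustrative, not part of any specific driver API):

```python
import random
import time

# SQLSTATE values commonly treated as transient connection failures.
TRANSIENT_SQLSTATES = {"08S01", "08001", "40001"}

def run_with_retry(operation, max_attempts=5, base_delay=2.0):
    """Run `operation` (a zero-argument callable that raises an exception
    carrying a `sqlstate` attribute on failure), retrying transient errors
    with exponential backoff plus jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception as exc:
            sqlstate = getattr(exc, "sqlstate", None)
            if sqlstate not in TRANSIENT_SQLSTATES or attempt == max_attempts:
                raise  # non-transient error, or retries exhausted
            # Exponential backoff with jitter to avoid retry storms.
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, base_delay)
            time.sleep(delay)
```

In practice the retried `operation` should be the whole batch (insert plus commit), so a failed commit is re-run from a clean transaction rather than resumed mid-stream.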

Questions

We would like to understand whether this can be mitigated on the Fabric SQL Database side, not only in the application:

  1. Are there any Fabric SQL or database-level settings to:
    • increase connection or transaction timeouts?
    • extend allowed execution time for long-running bulk operations?
  2. Are there any limits or behaviors specific to Fabric SQL that could cause connections to be dropped during large bulk INSERT + COMMIT operations?
  3. Is TCP Provider: Error code 0x68 (SQLState 08S01) considered a known transient condition when running heavy bulk loads in Fabric SQL (e.g. due to gateway, proxy, or capacity constraints)?

Goal

The goal is to make large one-time full loads via bulk INSERT stable and predictable in Fabric SQL, without relying purely on retries after failed commits.

Any guidance from Microsoft or others running large-scale bulk ingestion into Fabric SQL would be very helpful.

Thanks in advance!

 

Regards

1 ACCEPTED SOLUTION

Hi @HugoQueiroz-MSF ,

Thank you for the detailed update. Since you’ve already adjusted the ODBC/TCP settings and attempted table slicing, it seems the issue may be related to intermittent connection drops during lengthy bulk load operations.

For very large tables, consider loading the data into a staging table first and using smaller commit batches rather than a single large transaction. Importing the data in smaller partitions or incremental ranges can also help improve load stability.
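The smaller-commit-batch approach above can be sketched as follows. This is a minimal example, assuming a Python-driven load; the `pyodbc` calls, staging table name, and batch size in the usage comment are illustrative assumptions, not from the thread:

```python
from itertools import islice

def commit_batches(rows, batch_size):
    """Yield successive lists of up to `batch_size` rows from any iterable,
    so each list can be inserted and committed as its own short transaction
    instead of one multi-million-row transaction."""
    it = iter(rows)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

# Hedged usage sketch (pyodbc-style names assumed):
# cursor.fast_executemany = True
# for batch in commit_batches(source_rows, 50_000):
#     cursor.executemany("INSERT INTO stg.MyTable VALUES (?, ?)", batch)
#     connection.commit()  # short transactions lose far less work on a drop
```

The design trade-off: smaller batches mean more round trips and commits, but each transient drop costs only one batch instead of the whole load.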

It could also be helpful to test a Lakehouse- or Pipeline-based loading method in Microsoft Fabric, rather than relying on a single, long-running bulk INSERT via ODBC. Private Link may enhance connection stability once it's available for Fabric SQL Database, though this isn't confirmed yet.

 

Please try these approaches and let us know the behavior so we can review further.


5 REPLIES
V-yubandi-msft
Community Support

Hi @Oviwan ,

May I know if your issue has been resolved? If you need any additional details or clarification from our side, please let us know.

 

Thanks.

V-yubandi-msft
Community Support

Hi @Oviwan ,

If you get a chance, please review the responses shared by @HugoQueiroz-MSF. They cover the key points, so kindly check and let us know if you need any additional details.

 

@HugoQueiroz-MSF  Thank you for your valuable response.

HugoQueiroz-MSF
Microsoft Employee

@Oviwan

TCP Provider: Error code 0x68 (SQLState 08S01) typically indicates an unexpected connection drop (it can be several different things). To better understand your specific scenario, we would need additional details about the source system and, ideally, its logs as well. In this case, could you please open a support request so we can investigate in more detail?

As general best practices, consider reducing the transaction scope and duration (for example, committing smaller batches), implementing retry logic, and staging the inserts, which can also help improve reliability and performance.

 

Hi 

We are loading data into Microsoft Fabric SQL Database using HVR (Fivetran), which connects via ODBC.
The source system runs on Linux on‑premises, and the connection is established to the public endpoint of the Fabric SQL Database. We are in active contact with HVR/Fivetran support regarding this issue.

To mitigate overhead and improve stability during large bulk loads, we have already tried the following:

  • Disabled the primary key on very large target tables in Fabric
  • Disabled replication of those tables to the Analytics endpoint to avoid additional overhead during ingestion
  • Implemented table slicing to load data in smaller chunks via HVR
  • Tuned ODBC, TCP timeout, and keepalive parameters on the Linux side to stabilize long‑running connections
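For reference, the keepalive and timeout tuning above can also be expressed at the driver level via connection-string keywords. The sketch below assembles such a string in Python; the `KeepAlive`, `KeepAliveInterval`, and connect-retry keywords are documented for recent Microsoft ODBC Driver versions on Linux, but the specific values and server names here are illustrative assumptions:

```python
def build_connection_string(server, database, keepalive=30, keepalive_interval=5):
    """Assemble an ODBC Driver 18 connection string with TCP keepalive and
    idle-connection-resiliency keywords, so long-running bulk loads are less
    likely to be silently dropped during quiet phases."""
    parts = {
        "Driver": "{ODBC Driver 18 for SQL Server}",
        "Server": server,
        "Database": database,
        "Encrypt": "yes",
        "KeepAlive": str(keepalive),                   # seconds before first probe
        "KeepAliveInterval": str(keepalive_interval),  # seconds between probes
        "ConnectRetryCount": "3",                      # driver-level reconnect attempts
        "ConnectRetryInterval": "10",                  # seconds between reconnects
    }
    return ";".join(f"{k}={v}" for k, v in parts.items())
```

Note that idle-connection resiliency helps with drops between statements; it cannot rescue a connection lost mid-commit, which is why shorter transactions remain the primary mitigation.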

Despite these measures, we continue to see intermittent load failures with:

TCP Provider: Error code 0x68 (08S01)

Based on this and given that client‑side timeouts and keepalive settings have already been optimized, the connection appears to be terminated on the Fabric side, rather than by the Linux/ODBC client.

 

One of our core tables (general ledger) contains over 1 billion rows. Even slicing by year still results in chunks that are too large and frequently abort with the same error.

 

Further slicing these large tables into much smaller ranges would require significant additional engineering effort, is difficult to generalize and automate within HVR, and is not very practical for long‑term operations.

 

I noticed on the Fabric roadmap that Private Link support for SQL Database is planned. Could Private Link help in this scenario by bypassing intermediate network hops or improving connection stability for long‑running ODBC-based bulk loads?

 

Are there any other recommended patterns, configuration options, or architectural approaches to improve reliability for very large bulk loads into Fabric SQL—beyond aggressively slicing tables?

 

Any guidance or best practices would be highly appreciated.

