Hi Fabric community,
We are experiencing intermittent connection failures when performing full data loads using bulk INSERT statements into a SQL Database in Microsoft Fabric.
The error typically occurs during COMMIT, after a large bulk insert has been running successfully for some time.
DBMS error [TCP Provider: Error code 0x68. Communication link failure]. DBMS error number [104]. SQL State [08S01]. DBMS version [12.00.9114]. Driver [Microsoft][ODBC Driver 18 for SQL Server]. Last query [commit].
The bulk insert runs for a while and then fails sporadically, usually during the final COMMIT.
A simple retry often succeeds, but not always, which suggests a transient connectivity issue rather than a data or schema problem.
Questions
We would like to understand whether this can be mitigated on the Fabric SQL Database side, not only in the application:
The goal is to make large one-time full loads via bulk INSERT stable and predictable in Fabric SQL, without relying purely on retries after failed commits.
Any guidance from Microsoft or others running large-scale bulk ingestion into Fabric SQL would be very helpful.
Thanks in advance!
Regards
Hi @HugoQueiroz-MSF ,
Thank you for the detailed update. Since you’ve already adjusted the ODBC/TCP settings and attempted table slicing, it seems the issue may be related to intermittent connection drops during lengthy bulk load operations.
For very large tables, consider loading the data into a staging table first and using smaller commit batches rather than a single large transaction. Importing the data in smaller partitions or incremental ranges can also help improve load stability.
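As a rough sketch of the smaller-commit-batch idea, the load can be split so each COMMIT covers a bounded number of rows instead of one giant transaction. This is plain Python with the actual driver calls omitted; the `chunked` helper and `batch_size` value are illustrative, not an HVR or Fabric API:

```python
def chunked(rows, batch_size):
    """Yield successive batches of at most `batch_size` rows,
    so each batch can be inserted and committed on its own."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # final partial batch
        yield batch

# Hypothetical usage with an ODBC connection (pseudocode):
#   for batch in chunked(source_rows, batch_size=50_000):
#       cursor.executemany(insert_sql, batch)
#       connection.commit()  # small, fast commit per batch
```

If a batch fails with a connection drop, only that batch needs to be retried rather than the whole load.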
It could also be helpful to test a Lakehouse- or Pipeline-based loading method in Microsoft Fabric, rather than relying on a single, long-running bulk INSERT via ODBC. Private Link may improve connection stability once it’s available for Fabric SQL Database, though this isn’t confirmed yet.
Please try these approaches and let us know the behavior so we can review further.
Hi @Oviwan ,
May I know if your issue has been resolved? If you need any additional details or clarification from our side, please let us know.
Thanks.
Hi @Oviwan ,
If you get a chance, please review the responses shared by @HugoQueiroz-MSF . They have correctly pointed out the key points, so kindly check and let us know if you need any additional details.
@HugoQueiroz-MSF Thank you for your valuable response.
Hi @Oviwan ,

TCP Provider: Error code 0x68 (SQLState 08S01) typically indicates an unexpected connection drop (it can have several different causes). To better understand your specific scenario, we would need additional details about the source system and, ideally, its logs as well. Could you please open a support request so we can investigate in more detail?
As general best practices, consider reducing the transaction scope and duration (for example, committing smaller batches), implementing retry logic, and staging the inserts, which can also help improve reliability and performance.
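The retry-logic suggestion above can be sketched as a small wrapper with jittered exponential backoff. This is a generic sketch, not Fabric- or driver-specific; the `is_transient` predicate and SQLSTATE set are assumptions you would adapt to your ODBC driver's exception type:

```python
import random
import time

# SQLSTATE values commonly treated as transient connection failures
# (08S01 is the communication-link failure reported in this thread).
TRANSIENT_SQLSTATES = {"08S01", "08001", "08006"}

def with_retry(operation, is_transient, max_attempts=5, base_delay=1.0):
    """Run `operation`; on a transient failure, back off and retry.

    Backoff is exponential (base_delay, 2x, 4x, ...) with jitter so that
    concurrent loaders do not all retry at the same instant.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception as exc:
            if attempt == max_attempts or not is_transient(exc):
                raise  # permanent error, or out of attempts
            delay = base_delay * 2 ** (attempt - 1) + random.uniform(0, base_delay)
            time.sleep(delay)
```

Combined with per-batch commits, a failed batch can then be re-run in place instead of restarting the whole load.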
Hi
We are loading data into Microsoft Fabric SQL Database using HVR (Fivetran), which connects via ODBC.
The source system runs on Linux on‑premises, and the connection is established to the public endpoint of the Fabric SQL Database. We are in active contact with HVR/Fivetran support regarding this issue.
To mitigate overhead and improve stability during large bulk loads, we have already tuned client-side ODBC/TCP timeout and keepalive settings and tried slicing large tables into smaller chunks (for example, by year).
Despite these measures, we continue to see intermittent load failures with:
TCP Provider: Error code 0x68 (08S01)
Based on this and given that client‑side timeouts and keepalive settings have already been optimized, the connection appears to be terminated on the Fabric side, rather than by the Linux/ODBC client.
One of our core tables (general ledger) contains over 1 billion rows. Even slicing by year still results in chunks that are too large and frequently abort with the same error.
Further slicing these large tables into much smaller ranges would require significant additional engineering effort, is difficult to generalize and automate within HVR, and is not very practical for long‑term operations.
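For what it's worth, generating much finer slices can be made mechanical rather than hand-engineered, if the table has a numeric (or date-convertible) key to range over. A minimal sketch of splitting a key range into bounded slices; the column name and chunk size in the usage comment are hypothetical:

```python
def key_ranges(lo, hi, chunk):
    """Split the inclusive key range [lo, hi] into consecutive
    inclusive sub-ranges of at most `chunk` keys each."""
    start = lo
    while start <= hi:
        end = min(start + chunk - 1, hi)
        yield (start, end)
        start = end + 1

# Hypothetical usage: one bulk insert per slice, e.g.
#   SELECT ... FROM src WHERE ledger_id BETWEEN :lo AND :hi
# for each (lo, hi) produced above.
```

Each slice then stays small enough that a single failed commit only costs one slice's worth of work, which may be easier to automate than bespoke per-table slicing rules.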
I noticed on the Fabric roadmap that Private Link support for SQL Database is planned. Could Private Link help in this scenario by bypassing intermediate network hops or improving connection stability for long‑running ODBC-based bulk loads?
Are there any other recommended patterns, configuration options, or architectural approaches to improve reliability for very large bulk loads into Fabric SQL—beyond aggressively slicing tables?
Any guidance or best practices would be highly appreciated.