Hi,
For one of my Fabric pipelines, I am getting the error message below after about 3 hours. The pipeline contains just a copy activity that connects to an on-prem SQL Server. The data is 16M rows by 10 columns, and the destination is a Lakehouse table.
Failure happened on 'destination' side. ErrorCode=LakehouseOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Lakehouse operation failed for: The stream was already consumed. It cannot be read again.. Workspace: 'xxxxxx-4644512b9051'. Path: 'xxxxx-e9ec19401cee/Tables/dbo/Sales/xxxxxx-dbc2acc54ff4.parquet'..,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.InvalidOperationException,Message=The stream was already consumed. It cannot be read again.,Source=System.Net.Http,'
Any ideas?
Hi @saglamtimur ,
We’re following up regarding your query. If it has been resolved, please mark the helpful reply as the Accepted Solution to assist others facing similar challenges.
If you still need assistance, please let us know.
Thank you.
Hi @saglamtimur ,
Following up to see if your query has been resolved. If any of the responses helped, please consider marking the relevant reply as the 'Accepted Solution' to assist others with similar questions.
If you're still facing issues, feel free to reach out.
Thank you.
Also, thanks @nilendraFabric and @BIByte for your advice and continued assistance.
Hi @saglamtimur ,
Thank you for engaging with the Microsoft Fabric Community.
As we haven’t heard from you in a while, we hope your issue has been resolved. If any of the responses here were helpful, please consider marking them as the Accepted Solution to assist others with similar queries.
If the issue was resolved through a support ticket, we’d greatly appreciate it if you could share any key insights or solutions provided by the support team, as this could benefit the wider community.
Thank you once again.
Hello @saglamtimur
Here is my guess; there are no docs for this, but I hope it makes sense.
Fabric Trial (F64) uses Capacity Unit smoothing, spreading compute resources over 24 hours. A 16M-row copy activity exceeding 3 hours likely hits CU limits, causing:
• Partial stream processing followed by forced termination
• Automatic retries reusing the same consumed stream
Please use Fabric's Capacity Metrics app to identify throttling patterns.
Trial capacities lack dedicated compute.
Process 1-2M rows per batch and see if this works.
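If splitting the load by hand is a pain, one option along the same lines is to let the copy activity range-partition the source read. This is only a sketch of the generic SQL source partition options, not something I have verified on your setup; the SaleId column name and the bounds are placeholders for a real numeric key on dbo.Sales:

"source": {
    "type": "SqlServerSource",
    "partitionOption": "DynamicRange",
    "partitionSettings": {
        "partitionColumnName": "SaleId",
        "partitionLowerBound": "1",
        "partitionUpperBound": "16000000"
    }
}

Each partition then reads a smaller key range in its own query instead of one long 16M-row stream.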
Hope this helps.
Utilization 5% (max), Throttling 2% (max), and no overages.
Thanks for getting back.
Utilization seems very low if you are ingesting 16M rows and the pipeline ran for 3 hours.
Fabric pipelines process batches using memory-optimized streams to handle large datasets efficiently. A 16M-row transfer can exceed the default buffer capacity, causing premature stream disposal.
The problem can also be on the source side with the integration runtime (IR). Ideally, use a self-hosted IR with at least 16 GB of RAM for 16M rows.
To enable buffering, add these properties under `typeProperties` in the pipeline JSON:
"typeProperties": {
"enableBuffer": true,
"bufferSize": 20000,
"parallelCopies": 4,
"source": {
"type": "SqlServerSource",
"queryTimeout": "02:00:00"
},
"sink": {
"type": "LakehouseSink",
"writeBehavior": "insert"
}
}
This allows the stream to be repositioned after transient failures.
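On the transient-failure point, an activity-level retry is also worth trying alongside the buffer settings, since a retry should start a fresh copy attempt with a new stream rather than re-reading the consumed one. Again, just a sketch using the standard activity policy block, with illustrative values:

"policy": {
    "timeout": "0.04:00:00",
    "retry": 2,
    "retryIntervalInSeconds": 300
}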
I haven't tried this myself, but please give it a try.
@BIByte thanks for your advice, and you're totally right. Since this is a trial capacity and all my data is Contoso sample data that will expire in 3 days, I don't mind.
@nilendraFabric it's a trial capacity.
Just a word of advice from a security perspective: please remove any references to GUIDs, like the workspace ID.