saglamtimur
Resolver II

Fabric pipeline - The stream was already consumed. It cannot be read again.

Hi,

 

For one of my Fabric pipelines, I am getting the error message below after about 3 hours. The pipeline contains just a Copy activity that connects to an on-premises SQL Server. The data is 16M rows, 10 columns. The destination is a Lakehouse table.

 

Failure happened on 'destination' side. ErrorCode=LakehouseOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Lakehouse operation failed for: The stream was already consumed. It cannot be read again.. Workspace: 'xxxxxx-4644512b9051'. Path: 'xxxxx-e9ec19401cee/Tables/dbo/Sales/xxxxxx-dbc2acc54ff4.parquet'..,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.InvalidOperationException,Message=The stream was already consumed. It cannot be read again.,Source=System.Net.Http,'

 

Any ideas?

9 REPLIES
v-veshwara-msft
Community Support

Hi @saglamtimur ,

We’re following up regarding your query. If it has been resolved, please mark the helpful reply as the Accepted Solution to assist others facing similar challenges.

If you still need assistance, please let us know.
Thank you.

v-veshwara-msft
Community Support

Hi @saglamtimur ,

Following up to see if your query has been resolved. If any of the responses helped, please consider marking the relevant reply as the 'Accepted Solution' to assist others with similar questions.

If you're still facing issues, feel free to reach out.

Thank you.

Also, thanks @nilendraFabric and @BIByte for your advice and continued assistance.

v-veshwara-msft
Community Support

Hi @saglamtimur ,
Thank you for engaging with the Microsoft Fabric Community.

As we haven’t heard from you in a while, we hope your issue has been resolved. If any of the responses here were helpful, please consider marking them as the Accepted Solution to assist others with similar queries.

 

If the issue was resolved through a support ticket, we’d greatly appreciate it if you could share any key insights or solutions provided by the support team, as this could benefit the wider community.

Thank you once again.

nilendraFabric
Community Champion

Hello @saglamtimur 

 

Here is my guess; there are no docs for this, but I hope it makes sense.

 

Fabric Trial (F64) uses Capacity Unit smoothing, spreading compute resources over 24 hours. A 16M-row copy activity exceeding 3 hours likely hits CU limits, causing:
• Partial stream processing followed by forced termination
• Automatic retries reusing the same consumed stream


Please use Fabric’s Capacity Metrics app to identify throttling patterns.

 

Trial capacities lack dedicated compute.

 

Process 1-2M rows per batch and see if this works.
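To illustrate one way to slice the load (a sketch only; since pipeline JSON cannot carry comments, note that the `Id` column and the parameter names are illustrative assumptions, not taken from your pipeline), the copy activity's source query can be parameterized so each run copies one batch:

"source": {
    "type": "SqlServerSource",
    "query": "SELECT * FROM dbo.Sales WHERE Id > @{pipeline().parameters.batchStart} AND Id <= @{pipeline().parameters.batchEnd}"
}

Driving this from a ForEach over precomputed ranges (0 to 2M, 2M to 4M, and so on) keeps each copy well inside the capacity window.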

 

Hope this helps.

saglamtimur
Resolver II

Utilization 5% (max), Throttling 2% (max), and no overages.

nilendraFabric
Community Champion

Thanks for getting back.

Utilization seems very low if you are ingesting 16M rows and the pipeline ran for 3 hours.

 

Fabric pipelines process batches using memory-optimized streams to handle large datasets efficiently. A 16M-row transfer can exceed default buffer capacities, causing premature stream disposal.

The problem can be on the source side as well, with the integration runtime (IR). Ideally, use a self-hosted IR with a minimum of 16 GB RAM for 16M rows.

Put this in the pipeline's JSON to enable buffering. Add these properties under `typeProperties`:

"typeProperties": {
"enableBuffer": true,
"bufferSize": 20000,
"parallelCopies": 4,
"source": {
"type": "SqlServerSource",
"queryTimeout": "02:00:00"
},
"sink": {
"type": "LakehouseSink",
"writeBehavior": "insert"
}
}

This allows the stream to be repositioned after transient failures.
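For context, here is roughly where that block sits in a full Copy activity definition (a sketch; the activity name and the `policy` retry values are illustrative assumptions, and the buffer settings are untested, as noted below):

{
    "name": "CopySalesToLakehouse",
    "type": "Copy",
    "policy": {
        "timeout": "0.12:00:00",
        "retry": 1,
        "retryIntervalInSeconds": 60
    },
    "typeProperties": {
        "enableBuffer": true,
        "bufferSize": 20000,
        "parallelCopies": 4,
        "source": {
            "type": "SqlServerSource",
            "queryTimeout": "02:00:00"
        },
        "sink": {
            "type": "LakehouseSink",
            "writeBehavior": "insert"
        }
    }
}

An activity-level `policy` with a bounded timeout and a single retry is a standard copy-activity setting and can help with transient failures independently of the buffer properties.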

 

I haven't tried this myself, but please give it a try.

 

saglamtimur
Resolver II

@BIByte thanks for your advice, and you're totally right. As this is a trial capacity and all my data is Contoso sample data that will expire in 3 days, I don't mind.

@nilendraFabric it's trial.

BIByte
Frequent Visitor

Just a word of advice from a security perspective: please remove any references to GUIDs, like the workspace ID.

nilendraFabric
Community Champion

Hey @saglamtimur 

 

Which F SKU are you on?
