DougalR
New Member

Incremental Refresh Failing

Hi all,

 

I have a report built in Power BI Desktop that reads hundreds of CSV files - all carrying slightly different data - loaded into ~19 tables. In Desktop it loads fine, a year's worth of files into each table, but I want to publish it to the Service and only refresh the latest data.

This failed on the first go, so I tried reducing the data set to store only a month and refresh a week (so it covers long weekends etc.).

I now have this error:

    Data source error
    DataSource.Error: Downstream service call failed with status code 'https://api.powerbi.com/powerbi/globalservice/v201606/clusterdetails'.
    Microsoft.Data.Mashup.ErrorCode = PowerPlatformDataflows018.
    Error = error "Microsoft.Mashup.Engine1.Runtime.ValueException: [DataFormat.Error] We reached the end of the buffer.
       at Microsoft.Mashup.Engine1.Language.ValueCreator.CreateValueForThrow(IThrowExpression throwExpr)
       at Microsoft.Mashup.Engine1.Language.ValueCreator.<>c__DisplayClass23_0.<CreateValueForRecord>b__0(Int32 index)
       at Microsoft.Mashup.Engine1.Runtime.RecordValue.DemandRecordValue.get_Item(Int32 index)
       at Microsoft.Data.Mashup.ProviderCommon.MashupResource.TryGetValue(Func`1 getValue, IValue& value, String& errorMessage)
    Record".
    ErrorCode = error "Microsoft.Mashup.Engine1.Runtime.ValueException: [DataFormat.Error] We reached the end of the buffer.
       at Microsoft.Mashup.Engine1.Language.ValueCreator.CreateValueForThrow(IThrowExpression throwExpr)
       at Microsoft.Mashup.Engine1.Language.ValueCreator.<>c__DisplayClass23_0.<CreateValueForRecord>b__0(Int32 index)
       at Microsoft.Mashup.Engine1.Runtime.RecordValue.DemandRecordValue.get_Item(Int32 index)
       at Microsoft.Data.Mashup.ProviderCommon.MashupResource.TryGetValue(Func`1 getValue, IValue& value, String& errorMessage)
    Record".
    RequestId = error "Microsoft.Mashup.Engine1.Runtime.ValueException: [DataFormat.Error] We reached the end of the buffer.
       at Microsoft.Mashup.Engine1.Language.ValueCreator.CreateValueForThrow(IThrowExpression throwExpr)
       at Microsoft.Mashup.Engine1.Language.ValueCreator.<>c__DisplayClass23_0.<CreateValueForRecord>b__0(Int32 index)
       at Microsoft.Mashup.Engine1.Runtime.RecordValue.DemandRecordValue.get_Item(Int32 index)
       at Microsoft.Data.Mashup.ProviderCommon.MashupResource.TryGetValue(Func`1 getValue, IValue& value, String& errorMessage)
    Record".
    RequestUrl = https://api.powerbi.com/powerbi/globalservice/v201606/clusterdetails.
    ErrorMessage = error "Microsoft.Mashup.Engine1.Runtime.ValueException: [DataFormat.Error] We reached the end of the buffer.
       at Microsoft.Mashup.Engine1.Language.ValueCreator.CreateValueForThrow(IThrowExpression throwExpr)
       at Microsoft.Mashup.Engine1.Language.ValueCreator.<>c__DisplayClass23_0.<CreateValueForRecord>b__0(Int32 index)
       at Microsoft.Mashup.Engine1.Runtime.RecordValue.DemandRecordValue.get_Item(Int32 index)
       at Microsoft.Data.Mashup.ProviderCommon.MashupResource.TryGetValue(Func`1 getValue, IValue& value, String& errorMessage)
    Record".
    Downstream service call failed with status code 'https://api.powerbi.com/powerbi/globalservice/v201606/clusterdetails'. The exception was raised by the IDbCommand interface.

Obviously not the clearest to decode.

Unless there's something obvious, I'm thinking of setting up a separate dataflow for each file in the Service and connecting my online report to that. It's probably better long term - if something fails, it's easier to diagnose the failing component.

Thoughts? Why would it work in Desktop and not online? (The Service has access to the SharePoint drive, and I've built several other reports the same way that all work.)

1 ACCEPTED SOLUTION
tayloramy
Community Champion

Hi @DougalR

 

The scary bit in your message is DataFormat.Error: We reached the end of the buffer. That almost always points to a malformed CSV somewhere (unclosed quote, rogue line break, wrong delimiter/encoding, or an inconsistent column count). The “clusterdetails” part is just the Service wrapper call that bubbled up the error; it’s not the true cause. See: Csv.Document documentation and examples of quoted line-break handling.

  • In Power Query, open the Transform Sample File for the folder combine and make it error-tolerant:
    1. In the Source step for the CSV, set QuoteStyle=QuoteStyle.Csv, specify Encoding=65001 (UTF-8), and add ExtraValues=ExtraValues.Ignore. This handles quoted line breaks and extra trailing columns gracefully. Docs - Connector options
    2. Avoid hard-coding a Columns= count unless every file is guaranteed identical.
    3. Wrap the per-file parse in try ... otherwise so you can identify the bad file(s) instead of failing the whole refresh (sample snippet below).
  • After you stabilize the CSV parsing, re-enable incremental refresh. Be aware that with file sources (SharePoint/CSV), query folding is limited; IR still creates partitions, but it may have to scan more data than a relational source. If refresh time matters, stage the files into a Dataflow Gen2 or Lakehouse table with a clean schema and an ingestion date, and point your semantic model's IR at that curated table (see the RangeStart/RangeEnd sketch below). See: IR overview, IR troubleshooting, Text/CSV connector tips.
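
Here is the sample snippet referenced in step 3 - a minimal sketch, assuming a SharePoint folder combine. The site URL, the .csv name filter, and the fnLoadCsv helper are illustrative placeholders, not your actual query names:

    let
        // Defensive per-file parse: UTF-8 plus CSV-style quoting, so quoted
        // line breaks inside fields are handled instead of hitting end-of-buffer.
        // Note: ExtraValues.Ignore is only accepted by the positional overload,
        // e.g. Csv.Document(content, null, ",", ExtraValues.Ignore, 65001).
        fnLoadCsv = (content as binary) as table =>
            Csv.Document(
                content,
                [Delimiter = ",", Encoding = 65001, QuoteStyle = QuoteStyle.Csv]
            ),

        Source   = SharePoint.Files("https://contoso.sharepoint.com/sites/Reports", [ApiVersion = 15]),
        CsvFiles = Table.SelectRows(Source, each Text.EndsWith([Name], ".csv")),

        // try ... otherwise: a malformed file becomes null instead of failing
        // the whole refresh, so you can list the culprits.
        Parsed   = Table.AddColumn(CsvFiles, "Data", each try fnLoadCsv([Content]) otherwise null),
        BadFiles = Table.SelectRows(Parsed, each [Data] = null)
    in
        BadFiles

Run BadFiles once against your folder to see which file names come back; your real queries would instead keep the rows where [Data] is not null and expand them as usual.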
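
For the incremental refresh side, a rough sketch of the filter step IR expects once the data is staged. RangeStart and RangeEnd are the reserved DateTime parameter names Power BI requires (created via Manage Parameters); Curated and IngestionDate are assumed stand-ins for your staged table and its ingestion-date column:

    // Sketch only: Curated stands for the staged Dataflow Gen2 / Lakehouse
    // table (the connector's navigation steps are omitted here).
    Filtered = Table.SelectRows(
        Curated,
        each [IngestionDate] >= RangeStart and [IngestionDate] < RangeEnd
    )

The half-open >= / < pattern matters: filtering with >= on both ends can duplicate rows that land exactly on a partition boundary.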

 

If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, mark this post as the solution.


5 REPLIES

Thanks for the feedback - yes, I've decided the best way is to use a Dataflow Gen2, so I'm currently copying my code online. I've got 2/3 of it up and working, so I think that's the way forward - much easier to diagnose where an error comes from!

Hi @DougalR 

Glad to hear you've made good progress with Dataflow Gen2. Could you share an update on whether you were able to get the remaining part working as expected?

 

Hi, so yeah, I don't know why I didn't do it earlier, but all the data points I am loading are now up and running as Dataflows. I'm just about done remapping them into my report, but on initial tests it's all working.

Glad you have a path forward @DougalR

 

If your problem is resolved, please mark a post as the solution. This will help future forum members find answers quickly. 

 

If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, mark this post as the solution.

