
yoa380
Helper I

Incremental Refresh - issue

Hello!

I have an issue with incremental refresh. I followed several YouTube videos where it worked, but every time I publish the model to my online workspace I get a refresh error message:

"Expression.Error: There weren't enough elements in the enumeration to complete the operation. #table({"Content", "Name", "Date"}, {}). There weren't enough elements in the enumeration to complete the operation. The exception was raised by the IDbCommand interface."

 

The workspace is Premium and has a diamond logo.

 

Here is how I built my test file in Desktop:

I pull files from a SharePoint site using SharePoint.Files. I filter down to the test folder I want. My file names contain the date (they are daily files), so I use the names to derive a new column giving me each file's date. I set this column to Date/Time type.

I create 2 parameters (RangeStart and RangeEnd), both with the Required box ticked, Date/Time type, the middle box set to "Any value", and Current Value 24/02/2025 00:00:00 for Start and 01/03/2025 00:00:00 for End.
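For reference, a Date/Time parameter created through Manage Parameters typically generates M along these lines (the current value below mirrors the RangeStart described above; if yours differs slightly, the key points are the datetime type and the required flag, which incremental refresh expects):

```m
// RangeStart, as generated by Manage Parameters (Required ticked, type Date/Time)
#datetime(2025, 2, 24, 0, 0, 0)
    meta [IsParameterQuery = true, Type = "DateTime", IsParameterQueryRequired = true]
```

RangeEnd follows the same shape with its own current value.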

Back in the main query, I filter my Date/Time column with:

= Table.SelectRows(#"Type modifié", each [Date] >= DateTime.From(RangeStart) and [Date] < DateTime.From(RangeEnd))

Then come the automated Combine Files transformation steps that load the file contents. (I deleted the automated step that hides columns, but it doesn't matter: the issue was there both before and after I did that.)
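As a sketch, the overall shape of such a query might look like this (the site URL, folder path and file-name pattern are hypothetical placeholders, not taken from the post):

```m
let
    // hypothetical SharePoint site
    Source = SharePoint.Files("https://contoso.sharepoint.com/sites/Sales"),
    // keep only the test folder
    TestFolder = Table.SelectRows(Source, each
        Text.StartsWith([Folder Path], "https://contoso.sharepoint.com/sites/Sales/Shared Documents/Test/")),
    // derive the date from a name like "Report_2025-02-24.csv"
    WithDate = Table.AddColumn(TestFolder, "Date", each
        DateTime.From(Text.Middle([Name], 7, 10)), type datetime),
    // incremental refresh filter: >= RangeStart and < RangeEnd
    Filtered = Table.SelectRows(WithDate, each
        [Date] >= RangeStart and [Date] < RangeEnd)
in
    Filtered
```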

This works well in Desktop, so I save & load.

I right-click the table and create the incremental refresh policy: I select my table (I have only one), enable the toggle, set "Archive data" to 2 years before refresh date and "Incrementally refresh data" to 2 days before refresh date. The generated scheme looks logical to me. No other box is ticked below that.

I publish my model online and click the refresh button. After 10 minutes I always get the error message quoted at the beginning of my post.

 

I'm out of ideas about other parameters or things I may have forgotten or done wrong...

1 ACCEPTED SOLUTION

Hi @yoa380 , Thank you for reaching out to the Microsoft Community Forum.

 

The issue you're encountering happens because SharePoint.Files() does not support query folding, which is necessary for Incremental Refresh to function in the Power BI Service. Since your query doesn't fold, Power BI retrieves all the data before applying the filtering logic, leading to the refresh failure.

 

There are two possible solutions for this:

  1. In the Power Query Editor, replace SharePoint.Files("https://yourtenant.sharepoint.com/sites/yoursite") with SharePoint.Contents("https://yourtenant.sharepoint.com/sites/yoursite"), navigate directly to the folder containing your files, then apply the RangeStart and RangeEnd filters. This lets the query fold, allowing Power BI to push the filtering logic to the source so Incremental Refresh works properly.
  2. If SharePoint.Contents() still doesn't fold, create a Dataflow in the Power BI Service: connect the dataflow to SharePoint, apply your transformations there, and enable Incremental Refresh on the dataflow. In Power BI Desktop, connect to the dataflow instead of SharePoint directly.
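A minimal sketch of option 1, assuming a hypothetical site with a "Shared Documents/Test" folder (adjust the names to your tenant):

```m
let
    Site = SharePoint.Contents("https://contoso.sharepoint.com/sites/Sales"),
    // drill into the folder instead of listing every file on the site
    Library = Site{[Name = "Shared Documents"]}[Content],
    TestFolder = Library{[Name = "Test"]}[Content],
    // filter on the built-in modified timestamp
    // (or on a date column derived from the file name, as in the original query)
    Filtered = Table.SelectRows(TestFolder, each
        [Date modified] >= RangeStart and [Date modified] < RangeEnd)
in
    Filtered
```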

If this helped solve the issue, please consider marking it 'Accept as Solution' so others with similar queries may find it more easily. If not, please share the details, always happy to help.
Thank you.


16 REPLIES
yoa380
Helper I

After watching a few more videos and pages, I found another difference in my model: my filter query has "View Native Query" greyed out. Maybe a SharePoint.Files kind of source cannot work with Incremental Refresh...?


Hello, I still have one issue. I now have one query in my model that consolidates the SharePoint files. It contains the start/end parameters and also holds the IR policy. I also deleted the pivot step because, after testing, I realized that doing a pivot (splitting a column's data into 2 new columns) crashes the online refresh: it tells me the new column names cannot be found.

 

The first online refresh works and correctly creates the partitions, taking all files from 1 March 24 to today, even though there is the orange warning that the query cannot be folded.

 

But in my model, I have a query that takes the consolidated query as its starting point and does the pivot. My issue is that this one does not contain all the file data previously loaded with IR.

It only contains the file data I loaded in Desktop before publishing online.

 

I don't have this issue in Desktop: whatever we load in the initial consolidated query is available in all the other queries that follow and take it as their starting point.

 

Hi @yoa380 , Thank you for reaching out to the Microsoft Community Forum.

 

Your issue arises because dependent queries keep referencing the data that was loaded before publishing, even after IR refreshes the full dataset. To ensure that your pivot query references the fully refreshed partitions, the best solution is to use DirectQuery over the published dataset, as it dynamically queries the latest refreshed data. Alternatively, you can use a Dataflow to manage the extraction and the pivot, ensuring it references the fully refreshed dataset.

 

If this helped solve the issue, please consider marking it 'Accept as Solution' so others with similar queries may find it more easily. If not, please share the details, always happy to help.
Thank you.

Hello, thank you. If I understand correctly, you suggest first creating a dataflow in my workspace? But I was told we can't use IR over a dataflow. Maybe that's wrong? And if it is possible, what kind of dataflow, Gen1 or Gen2? Also, is it possible to create a dataflow when the sources are csv/txt files on a SharePoint?

 

Then, if that's correct, my model will take this dataflow as its source and I'll be able to have queries with a pivot?

v-hashadapu
Community Support

Hi @yoa380 , Thank you for reaching out to the Microsoft Community Forum.

Please let us know if your issue is solved. If it is, consider marking the answer that helped 'Accept as Solution', so others with similar queries can find it easily. If not, please share the details.
Thank you.

yoa380
Helper I

I think I found something, but it would be good to have confirmation. My archive setting was 2 years, but in my SharePoint test folder I only had 4 files with 4 dates in the past week. If I change the incremental setting to archive = 4 days, it works online. I think it needs to find at least one file in SharePoint corresponding to the beginning of my archive period, and only for the first big refresh online. Once that's done, subsequent refreshes work even if I delete those files from SharePoint. My online report still keeps the old data.

Hi @yoa380 , I think your analysis is correct. The issue occurs because Power BI's initial refresh requires data for the entire archive period. If your SharePoint folder only has recent files, the refresh fails as there isn't enough data to populate the archive. You could try these workarounds:

  1. Shorten the Archive Period Temporarily:

    • Set the archive period to match the available files.
    • Publish and perform the initial refresh successfully.
    • After the first refresh, revert to the 2-year archive period if needed.
  2. Add Dummy Files:

    • Add dummy files in SharePoint to cover the 2-year archive.
    • Perform the initial refresh, then delete the dummy files.

Thank you for your confirmation! Just a question about workaround 1: once the model is uploaded online, how do you change the incremental period? (You don't necessarily have to set it up again in the Desktop model and re-upload? Is the incremental setup also available online once the model has been uploaded at least once?)

Hi @yoa380 , Thank you for reaching out to the Microsoft Community Forum.

 

Yes, you are correct. Power BI's initial refresh requires data for the entire archive period. If your SharePoint folder only has recent files, the refresh fails as there isn’t enough data to populate the archive.

 

Here are two workarounds:

  1. In Power BI Desktop, set the archive period to match the available files. Publish and complete the initial refresh successfully. After that, you can increase the archive period in Power BI Service without needing to republish.
  2. Upload temporary files in SharePoint with older dates to match the archive start date. Perform the initial refresh in Power BI Service. Once successful, you can delete the dummy files.

 

If this helped solve the issue, please consider marking it 'Accept as Solution' so others with similar queries may find it more easily. If not, please share the details, always happy to help.
Thank you.

Hello, thank you for your answer. I have 2 questions :

For workaround 1: where in the Power BI Service do you change the archive period? Let's say my archive period is initially 1 month. I will publish the model online and perform the first refresh. If I understand correctly, it will load 1 month of data plus the refresh period. Then if I extend the archive period to, say, 1 year, the 1 year of data won't be there, since only 1 month was loaded initially? And will it need every single daily file for the period to work?

I ask because in my test where I set archive = 7 days, for example, it looks like the first refresh only works when all 7 daily files are in my SharePoint. If one day is missing, it fails. In reality, for my actual needs, I won't have daily files; sometimes it's weekly. Can it work somehow?

 

For workaround 2: let's say I set the archive to 1 month but only have 1 week of files. Do I just need to create one dummy file dated 1 month ago, which will then drop out as the archive period slides? Or do I also need to create dummy files for each day of the period?

Hi, thank you for reaching out to the Microsoft Community Forum.

 

  1. Once a dataset with incremental refresh is published, you cannot change the archive period in Power BI Service. Incremental refresh settings must be defined in Power BI Desktop before publishing. If you need to modify the archive period, you must update the settings in Power BI Desktop and republish the dataset.
  2. No, you do not need a dummy file for every missing day. Power BI only requires at least one file within the oldest archive date range for the first refresh. Once the initial refresh is successful, you can delete the dummy files, and Power BI will continue to retain old data as the archive period slides forward automatically.
  3. Yes, but you need to ensure that the query logic correctly selects the latest available file within the refresh period. Instead of expecting daily files, adjust the filtering logic so that Power BI can find the most recent file available within each refresh window.
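As an illustration of point 3, one way to make the query tolerant of missing days is to keep only the most recent file inside the filtered window instead of expecting a file for an exact date. This is a hedged sketch: the step name #"Filtered Rows" and the column name "Date File" stand in for whatever your own query uses.

```m
// after the RangeStart/RangeEnd filter step, keep only the newest file in the window
LatestInWindow = Table.FirstN(
    Table.Sort(#"Filtered Rows", {{"Date File", Order.Descending}}),
    1)
```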

Also, please refer to this document: Incremental refresh and real-time data for semantic models.

 

If this helped solve the issue, please consider marking it 'Accept as Solution' so others with similar queries may find it more easily. If not, please share the details, always happy to help.
Thank you.

Hello, thanks for your answer.

My situation is that files are created in SharePoint every morning between Monday and Friday (not on weekends). If I create my IR policy on my table with Archive = 12 months & Refresh = 1 day, it tells me: Archive from 01 March 24 to 6 March 25 (inclusive) and Refresh from 7 March 25 to 7 March 25 (inclusive). Do you confirm that for the first online refresh, I only need to make sure there is a dummy file corresponding to 01 March 24 in SharePoint?

 

When the archive period slides by 1 month in April, my 1 Mar 24 dummy data will disappear automatically. Is there any issue if the oldest file is then not 1 April 24 but something in Nov 24? (I mean, no file dated at the start of the archive period is needed anymore; it just takes the files included in the period?)

Hi @yoa380 , yes, you are correct. For the first online refresh, ensure a dummy file exists for the earliest archive date, i.e., 1 March 24. Once the archive period slides, the dummy file will be excluded automatically. It is not mandatory for the oldest file to match the start of the archive period (a gap is fine if, say, the next available files are from November 24). The system only considers files within the defined period, so this setup will work seamlessly.
For further clarification / information please refer to this: Incremental refresh and real-time data for semantic models.
If this post helped, please give a kudos and accept it as a solution.
Thanks in advance

Hello, I am still struggling. The first online refresh keeps telling me that a column (which it names) is missing from a table. I have a column with this name in many tables, and I don't have any table named the same as this column.

And the refresh works very well in Power BI Desktop without any issue.

 

"Expression.Error: The column 'Gap vs min stk objective' of the table wasn't found. Gap vs min stk objective. The exception was raised by the IDbCommand interface."

 

To recap: my SharePoint has files with dates 1 March 24, 21 Nov 24, then every weekday from 17 Dec 24 to 7 March 25.

My parameters are RangeStart = 1 March 24 and RangeEnd = 7 March 25 (but I read that these parameters work with any values, as the online service applies its own logic). They are in Date/Time format.

 

My Desktop model refreshed successfully, loading files from 1 March 24 to 6 March 25 (not 7 March 25), since RangeStart uses >= and RangeEnd uses <.

The exact formula is: Table.SelectRows(#"Type modifié4", each [Date File] >= Date.From(RangeStart) and [Date File] < Date.From(RangeEnd))

The date column used is in Date format, not Date/Time (which is why I used Date.From).

After this step in the query come the automatic Combine Files steps that merge the file data, starting with "Filtered Hidden Files" etc., as usual.

 

The table that calls the SharePoint files has the IR setup like this: Archive 12 months (showing from 1 March 24 to 6 March 25 inclusive) and Refresh 1 day (showing from 7 March 25 to 7 March 25 inclusive).

No other option is ticked.

 

Do you have any ideas?

Hi @yoa380 , Thank you for reaching out to the Microsoft Community Forum.

 

Please try below:

  1. Check the earliest file that falls within the incremental refresh window. If this file, or any file in the period, does not contain "Gap vs min stk objective", the refresh will fail. Modify the files manually to include the missing column, even if it's empty.
  2. If modifying the files is not an option, update Power Query to handle missing columns gracefully. Example:
    = Table.AddColumn(Source, "Gap vs min stk objective", each try [Gap vs min stk objective] otherwise null)

This ensures that even if a file does not have the column, the query still runs without errors.

  3. Sometimes, Power BI automatically removes certain files during the "Combine Files" step. Double-check this step and ensure that all expected files are included.
  4. If needed, upload only a few files with a known correct schema and run the refresh online. This will confirm whether the issue is with the files or something else.
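Building on the Table.AddColumn suggestion above, a more general helper (hypothetical, not from the thread) can guarantee that a whole list of expected columns exists before the combine or pivot step, adding any missing ones as nulls:

```m
// hypothetical helper: add any expected column that a file's table lacks
EnsureColumns = (t as table, expected as list) as table =>
    List.Accumulate(
        expected,
        t,
        (state, col) =>
            if List.Contains(Table.ColumnNames(state), col)
            then state
            else Table.AddColumn(state, col, each null))

// usage, assuming a step named Source:
// EnsureColumns(Source, {"Gap vs min stk objective"})
```

Applied inside the sample-file transform, this keeps every file's schema aligned without editing the files themselves.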

If this helped solve the issue, please consider marking it 'Accept as Solution' so others with similar queries may find it more easily. If not, please share the details, always happy to help.
Thank you.
