Sammycat
New Member

Slow query - multiple appended queries

Hi

 

I have a Power Query (Excel) that reads and cleans data from text files.

 

There is a separate text file for each month, and I append the new month's query to the original query each month.

I copy the M code from the previous month's query, create a new query for the new month, paste the M code into the new query, and append the new query to the original query (which happens to be month 1).

 

So, effectively I have one big query that includes all the months' data to date, plus individual queries for each month.
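For context, the appended structure described above presumably boils down to something like this in M (query names are placeholders for the actual monthly queries):

```m
// Hypothetical master query: each month lives in its own query,
// and they are all appended together, so every month's file is
// re-read and re-transformed on every refresh.
let
    Source = Table.Combine({Month01, Month02, Month03 /* ...one per month */})
in
    Source
```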

 

The model takes an age to refresh (at least daily) and I suspect it's due to the duplication of data.

 

I'm sure there must be a more efficient way to structure this - can someone please help/advise?

 

Thanks in advance!

6 REPLIES
BA_Pete
Super User

Hi @Sammycat ,

 

Put all of your text files into a single folder, either on your network or SharePoint.

In Power Query, use a Folder Connector to connect to this folder.

When prompted, choose 'Transform & Combine' (or similar wording).

This will generate you a load of helper queries. The one you want is called something like 'Transform Sample File'. Whatever transformations you apply to one file in this query will be applied to all files in the same folder, and all files transformed in this way will be automatically appended into a master query.
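As a rough sketch, the master query that the Folder connector generates amounts to something like the following (the folder path is a placeholder, and 'Transform File' stands for the generated function you edit via 'Transform Sample File'):

```m
let
    // Read every file in the folder - new monthly files are picked up automatically
    Source = Folder.Files("C:\Data\MonthlyFiles"),
    // Apply the shared transformation function to each file's content
    Transformed = Table.AddColumn(Source, "Data", each #"Transform File"([Content])),
    // Expand and append all months into one master table
    Combined = Table.Combine(Transformed[Data])
in
    Combined
```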

 

Once you receive your new file each month, drop it into the same folder with the rest and, the next time your report refreshes, it will include the new data. Voila!

 

More details:

https://learn.microsoft.com/en-us/power-bi/transform-model/desktop-combine-binaries 

 

Pete



Now accepting Kudos! If my post helped you, why not give it a thumbs-up?

Proud to be a Datanaut!




Hi @BA_Pete 

The file is created automatically from a process control system and is updated hourly. There are several files created and dumped periodically, with different schemas and data - I'm only interested in one. So making this work would require manually copying the required file to a dedicated folder each time the query was refreshed.

 

 

Where does the process control system save the generated files to?

You could use Power Automate to detect when a new file with a specific name is generated and have it automatically copy the file into your query source folder.

 

Pete







The files are saved to an on-premises server.

 

Ok, is that some form of Blob Storage? If so, you could potentially query the file structure with PQ, filter to only the files whose names match the ones you want, and replicate the Folder Connector behaviour by creating your transformations in a function.

It sounds complicated, but it's not really.
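A minimal sketch of that approach, assuming the server is reachable via a UNC path and the wanted files share a name prefix (path, prefix, and the CSV settings are all placeholders to adjust to the real schema):

```m
let
    // Per-file transformations, replicating the 'Transform Sample File' pattern
    // (illustrative only - replace with the real cleaning steps)
    fnTransformFile = (fileContent as binary) as table =>
        Table.PromoteHeaders(Csv.Document(fileContent)),
    // List every file the process control system dumps to the server
    Source = Folder.Files("\\server\ProcessControl\Exports"),
    // Keep only the file(s) with the schema you care about
    Filtered = Table.SelectRows(Source, each Text.StartsWith([Name], "HourlyReadings")),
    // Apply the shared transformation to each matching file and append the results
    Transformed = Table.AddColumn(Filtered, "Data", each fnTransformFile([Content])),
    Combined = Table.Combine(Transformed[Data])
in
    Combined
```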

 

Pete







AntrikshSharma
Super User

@Sammycat 

You don't need to manually copy and paste this; just have PQ point to a folder.

You can place the files on SharePoint and set up incremental refresh.

You can try the Enhanced Compute Engine in a dataflow, which stages the data in a managed SQL instance so your queries will fold and refresh faster.
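If the files do end up on SharePoint, the same folder pattern works with the SharePoint connector - a sketch with a placeholder site URL, folder path, and file filter:

```m
let
    // All files in the SharePoint site (site URL is a placeholder)
    Source = SharePoint.Files("https://contoso.sharepoint.com/sites/Reporting"),
    // Restrict to the monthly extracts by folder and file name
    Filtered = Table.SelectRows(Source, each
        [Folder Path] = "https://contoso.sharepoint.com/sites/Reporting/Shared Documents/Monthly/"
        and Text.EndsWith([Name], ".txt")),
    // Parse and append each file (swap Csv.Document settings for the real layout)
    Transformed = Table.AddColumn(Filtered, "Data", each Csv.Document([Content])),
    Combined = Table.Combine(Transformed[Data])
in
    Combined
```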
