Hi,
Re the attached snapshot: I am ending up with a long list of unwanted queries that I do not use at all. I am aware that they can be useful for debugging, but I have already identified the errors with the Advanced Editor. Now I can barely see my main queries, and this long list needs to be removed or hidden. Any ideas? Thanks.
Sure, here is an example of a CSV import/merge: about 100 CSV files, all horribly wide (135+ columns) and each about 400K rows. I only need a small subset of the columns.
Here's the merge code:
let
    Source = SharePoint.Contents("https://xxx.sharepoint.com/sites/yyy", [ApiVersion = 15]),
    #"Shared Documents" = Source{[Name = "Shared Documents"]}[Content],
    #"Filtered Rows" = Table.SelectRows(#"Shared Documents", each Text.StartsWith([Name], "zzz") and [Extension] = ".csv"),
    #"Renamed Columns" = Table.RenameColumns(#"Filtered Rows", {{"Name", "Source.Name"}}),
    #"Added Custom 0" = Table.AddColumn(#"Renamed Columns", "CSV", each #"Get CSV file"([Content])),
    #"Removed Other Columns1" = Table.SelectColumns(#"Added Custom 0", {"Source.Name", "CSV", "Date modified"}),
    #"Expanded CSV" = Table.ExpandTableColumn(#"Removed Other Columns1", "CSV", Columns, null)
in
    #"Expanded CSV"
Note the last step: it references "Columns", which is a predefined list query like this:
{"Close Date",
"Fiscal Period",
"SFDC Opportunity Id",
"Opportunity Type",
"Forecast Category",
"Sales Territory ID",
"Product Line",
"GBU",
"Created Date",
"Deal Reg ID",
"Deal Reg Status",
"Pull Date",
"Product Category",
"Product Focus"}
The same list is also used when acquiring the CSV file.
(File) =>
let
    Source = try
        Csv.Document(Binary.Buffer(File), [Delimiter = ",", Encoding = 1252, QuoteStyle = QuoteStyle.Csv])
    otherwise
        Csv.Document(Binary.Buffer(File), [Delimiter = ",", Encoding = 1252, QuoteStyle = QuoteStyle.Csv])
in
    Table.SelectColumns(Table.PromoteHeaders(Source, [PromoteAllScalars = true]), Columns)
As you may know, SharePoint is a poor location (performance-wise) for pulling lots of files quickly, so the acquisition function hardens the process a bit with a retry (hence the try...otherwise with identical branches, with apologies) and enforced buffering. This may not be required in your situation.
That's all you need - no sample files, no parameters, no mess.
Once you understand how to merge files efficiently, you don't need any of those helper queries. What you do need is a list of the columns in your source files. That column list must be supplied when the files are combined (e.g. during Table.Combine()), and the auto-generated helper queries obtain it by reading a sample file twice. Using a predefined column list avoids that.
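To make that concrete: a predefined column list is simply a standalone list query that other queries reference by name. A minimal sketch (the query name "Columns" and the column names are illustrative, matching the pattern above):

```m
// Query "Columns": a standalone list query holding the schema you want to keep.
// Because the schema is stated explicitly, no sample file ever needs to be scanned.
{"Close Date", "Fiscal Period", "Product Line"}

// Any other query can then reference it by name, e.g. when expanding the
// merged file column:
//     Table.ExpandTableColumn(PreviousStep, "CSV", Columns)
// or when trimming a freshly loaded file down to the wanted columns:
//     Table.SelectColumns(Table.PromoteHeaders(Source, [PromoteAllScalars = true]), Columns)
```

If a new column is ever needed, you add it to this one list and every query that references it picks up the change.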
Can you please advise (with an example) how to use the 'predefined column list'? Do you mean I am using an inefficient method of merging the files, and that's why I ended up with such a long list of unnecessary queries?