Hi,
I am using = Table.SelectRows(Table1, each List.Contains(List, [UID])) to filter a 1.6-million-row table down to 300 categories. However, the query loads very slowly when I apply it, even though the 1.6-million-row source table has already finished loading. Theoretically, the new table should have fewer than 1.6 million rows, so why is it taking forever to load? Is there a better way to do this?
Thanks!
Daren
Hi @darentengdrake ,
Are you using DirectQuery mode? Why not filter the table in table view?
This could be caused by the list not being buffered.
So whatever step you're referencing as "List", make sure to wrap it in a List.Buffer function.
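A minimal sketch of this fix, assuming the 300 UIDs come from a query or step called Categories (the names Categories and Table1 are placeholders for your own steps):

```m
let
    // Buffer the lookup list once in memory so List.Contains
    // does not re-evaluate the list's source for every row
    BufferedList = List.Buffer(Categories[UID]),
    // The row-by-row membership test now hits the in-memory copy
    Filtered = Table.SelectRows(Table1, each List.Contains(BufferedList, [UID]))
in
    Filtered
```

Without the buffer, the list expression can be re-evaluated once per row, which multiplies the cost across 1.6 million rows.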
Imke Feldmann (The BIccountant)
If you liked my solution, please give it a thumbs up. And if I did answer your question, please mark this post as a solution. Thanks!
How to integrate M-code into your solution -- How to get your questions answered quickly -- How to provide sample data -- Check out more PBI learning resources here -- Performance tips for M-queries
I would like to know how many items a buffered list can hold before List.Contains starts to slow down. I tried with about 5k items and performance seems good, but not with 350k items.
I usually use buffered lists to filter values in tables with code like this:
if List.Contains(list, [field]) then true else false
Thanks in advance.
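For lists that large, a per-row List.Contains scan can stay slow even when buffered, since each row's test is linear in the list size. One alternative sketch (not from this thread, and the names BigTable and AllowedValues are hypothetical) is to turn the list into a one-column table and use an inner join, which the engine can execute as a key-based match rather than a scan per row:

```m
let
    // Turn the list into a one-column table with a distinct column name
    KeyTable = Table.FromList(AllowedValues, Splitter.SplitByNothing(), {"KeyUID"}),
    // Inner join keeps only rows whose [UID] appears in the list
    Joined = Table.Join(BigTable, "UID", KeyTable, "KeyUID", JoinKind.Inner),
    // Drop the helper key column from the result
    Filtered = Table.RemoveColumns(Joined, {"KeyUID"})
in
    Filtered
```

Note this assumes the list contains no duplicates; duplicate keys in KeyTable would multiply matching rows.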
Hi @darentengdrake ,
Are you using DirectQuery mode? Why not filter the table in table view?
I am using import mode. I could do the filter in table view using LOOKUPVALUE. Is there a difference in performance and speed between filtering in table view and filtering in the query?
Daren
Hi @darentengdrake ,
If you filter in the M query, only the filtered data is kept, and the condition is evaluated for each row. If you filter in table view instead, the data's integrity is preserved: all 1.6 million rows stay in memory, and you can first apply a filter to pick out the data you need for a calculation without evaluating it against every row, which shortens the calculation time.
However, if it needs to be done in the M query, it's better to follow @ImkeF's suggestion, which will also help reduce the time.
Likely having to do some kind of table scanning. @ImkeF?