Hello All,

I am building a report for my users to compare the results of "runs" generated by modelling software. I am currently using DirectQuery, since the run data is stored in SQL and I wanted new runs to be added to the report automatically, without a manual update each time a run is created. I use lots of slicers to filter the data, and I can feel the report slow down with each new query that is issued; I can only imagine this getting worse as new runs are added to the table.

I was wondering what workarounds you folks would use in this situation. These are my first two approaches:

- A single "apply changes" button for all my slicers, so I can change several slicers and then press Apply once, issuing one query rather than one for every selection/deselection. I know there is an option to add Apply buttons under Options > Query reduction, but that creates one per slicer, which looks messy and is annoying to use.
- A dashboard where the user selects the runs they wish to compare, then a Power BI query generates a table, either in the SQL database or in Power BI, that is a subset of the large table containing only data for those runs. My visuals and filters would then apply to this smaller table, which should presumably be queried much faster.

I also have a question about speed that I was unable to test myself, but which should be widely known. Suppose I have a table with 100 million rows and a runID column that runs from 1 to 100, with 1 million rows per value, plus a separate table per run holding only that run's 1 million rows. If I wanted to run a query for just runID 1 in Power BI, using slicers to select date/region/statistic attributes, would it be consistently quicker to query the smaller table with just a million rows, or to add a page filter in Power BI selecting runID = 1?

I know they would return exactly the same results, but my guess was that the latter would take longer, since it is searching a larger table for runID = 1.

Thanks for reading! If you have any insights or tips to get my queries running faster, that would be greatly appreciated.

Thanks,
EHa
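For the second approach, a minimal T-SQL sketch of the subset table might look like the following. All object and column names here (dbo.RunData, dbo.RunComparison, runID, [date], region) are assumptions for illustration; adjust them to your actual schema.

```sql
-- Sketch: materialise only the selected runs into a smaller table, then
-- point the report's DirectQuery visuals at dbo.RunComparison instead of
-- the full dbo.RunData table. Names are illustrative, not from the post.
IF OBJECT_ID('dbo.RunComparison', 'U') IS NOT NULL
    DROP TABLE dbo.RunComparison;

SELECT *
INTO dbo.RunComparison
FROM dbo.RunData
WHERE runID IN (1, 7, 42);   -- the runs the user picked to compare

-- An index on the slicer columns helps the subset queries filter quickly.
CREATE INDEX IX_RunComparison_Filters
    ON dbo.RunComparison (runID, [date], region);
```

The run IDs would need to be parameterised (for example via a stored procedure driven by the user's selection) rather than hard-coded as here. Note also that if dbo.RunData has a suitable index on runID, filtering the big table can perform close to querying a per-run table, which bears on the speed question above.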
I have some DAX Performance Tuning articles here:
https://community.powerbi.com/t5/Community-Blog/Performance-Tuning-DAX-Part-1/ba-p/976275
https://community.powerbi.com/t5/Community-Blog/Performance-Tuning-DAX-Part-2/ba-p/976813
These are also good articles:
https://maqsoftware.com/expertise/powerbi/dax-best-practices
https://www.sqlgene.com/2019/09/27/a-comprehensive-guide-to-power-bi-performance-tuning/
Video: https://www.sqlbi.com/tv/optimizing-analyzing-dax-query-plans-sqlbits-xii/#
https://docs.microsoft.com/en-us/power-bi/guidance/power-bi-optimization
Sorry for the link spam but it's a broad topic and the information you have provided is limited.
Thanks for the links @Greg_Deckler - there was some useful stuff in there, but nothing I found directly helpful for my current dilemma, as I am not using any long, custom-written queries at the moment.
Is there anything in particular you think I should add / explain clearer to get more specific advice for my problem?
@EHa Check out aggregation tables. They are specifically designed to prevent Direct Query from having to directly hit large tables for common aggregations.
https://docs.microsoft.com/en-us/power-bi/transform-model/desktop-aggregations
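To sketch the idea: an aggregation table is a small, pre-summarised copy of the big fact table. One way to build it on the SQL side could look like this (table and column names such as dbo.RunData, dbo.RunDataAgg and value are assumptions; use your own schema and measures):

```sql
-- Sketch: pre-aggregate the 100M-row fact table by the columns the report
-- slices on. In Power BI Desktop you would then add this small table to the
-- model and use "Manage aggregations" to map it onto the detail table, so
-- common aggregate queries never scan the large table. Names illustrative.
SELECT
    runID,
    [date],
    region,
    SUM(value)  AS SumValue,   -- replace with your actual measure columns
    COUNT(*)    AS RowCnt
INTO dbo.RunDataAgg
FROM dbo.RunData
GROUP BY runID, [date], region;
```

The aggregation table could be refreshed on a schedule (or rebuilt when new runs land), and queries that need row-level detail still fall through to the big table via DirectQuery.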