Hello All!
I am working on a model to track the activities completed by our service team so we can see whether they are being over- or under-worked. The data comes from our system and is pulled into Excel with a query. The model works, but I am fairly new to Power Query/Power Pivot and I don't believe it is well optimized.
I now need to start tacking on historical information so that I can make month-to-month comparisons. Currently I just widen the source query's date range every month to avoid adding extra connections (the query starts as Jan 1 - Jan 31; the next month it becomes Jan 1 - Feb 28). My concern with this method is that each month adds around 10,000 - 15,000 rows of data. The query currently holds around 20,000 rows and the workbook takes just over 7 minutes to refresh, and I worry the refresh time will keep climbing as the months go on.
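For context, the hard-coded date window looks roughly like this in Power Query (the source, table, and column names here are illustrative placeholders, not my actual schema):

```
let
    // Illustrative source; the real query points at our service system
    Source = Sql.Database("server", "database"),
    Activities = Source{[Schema = "dbo", Item = "ServiceActivities"]}[Data],
    // Hard-coded window that has to be widened by hand every month
    Filtered = Table.SelectRows(
        Activities,
        each [ActivityDate] >= #date(2024, 1, 1)
            and [ActivityDate] <= #date(2024, 2, 28))
in
    Filtered
```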
Is there a more efficient way to query this type of data in general? I know Excel is supposed to handle millions of rows, but I'm not sure Power Query copes as well at that scale. I'm happy to post a copy of the workbook if the inefficiency is likely in my query steps themselves.
Thank you in advance for your help!
Hi @Anonymous. Do you have access to Power BI? It has an incremental refresh option that loads only new data instead of re-querying everything since the beginning of time.
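For reference, incremental refresh in Power BI is driven by two reserved Date/Time parameters, RangeStart and RangeEnd, which you use to filter the source query; Power BI then substitutes partition boundaries at refresh time so only new periods are re-queried. A minimal sketch, assuming a datetime column (table and column names are illustrative):

```
let
    // Illustrative source and table names
    Source = Sql.Database("server", "database"),
    Activities = Source{[Schema = "dbo", Item = "ServiceActivities"]}[Data],
    // RangeStart/RangeEnd are the reserved Date/Time parameters;
    // Power BI replaces them with partition boundaries at refresh time,
    // so only the newest periods are actually pulled from the source
    Filtered = Table.SelectRows(
        Activities,
        each [ActivityDate] >= RangeStart and [ActivityDate] < RangeEnd)
in
    Filtered
```

Once the query filters on those parameters, you configure the incremental refresh policy on the table itself (for example, store 24 months of history and refresh only the most recent month).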