Hi All,
I need your help improving the performance of a query that handles a very large dataset.
I have more than 7 million rows of data. One column contains multiple text values that we want to analyze, so we initially pivoted the data, which generated more than 400 new columns.
We are now trying to build reports on this data, and that is where the problem started.
Data loading and visual refresh times have increased tremendously and are consuming all my resources.
I also tried a smaller query that pulls only the columns needed for the report, but the issue is still the same.
Could you please suggest some ideas for handling this issue and improving performance?
Note: I am getting this data from SQL Server.
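For context, the pivot step described above can be sketched with pandas (the table, column names, and values here are hypothetical, just to show the shape of the problem):

```python
import pandas as pd

# Hypothetical sample: one row per (record, tag) pair, as it might arrive from SQL Server.
long_df = pd.DataFrame({
    "record_id": [1, 1, 2, 2, 3],
    "tag": ["alpha", "beta", "alpha", "gamma", "beta"],
    "value": [10, 20, 30, 40, 50],
})

# Pivoting turns every distinct text value in "tag" into its own column;
# with hundreds of distinct values this produces hundreds of mostly-empty columns.
wide_df = long_df.pivot_table(index="record_id", columns="tag",
                              values="value", aggfunc="sum")
print(wide_df.shape)  # → (3, 3): one column per distinct tag
```

With 400+ distinct text values, the same operation yields 400+ sparse columns, which is the shape Power BI's engine then has to load and scan.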
Thanks,
Akash
Pivoting the data is usually not a good idea (especially when it results in 400 more columns!). I would keep it unpivoted and post here in the community for help with analyzing the unpivoted data.
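As a rough illustration of this suggestion (the table and column names are hypothetical), a wide table can be brought back to a long, unpivoted shape with pandas `melt`, which keeps the column count fixed no matter how many distinct text values appear:

```python
import pandas as pd

# Hypothetical wide table: one column per text value (imagine 400+ of these).
wide_df = pd.DataFrame({
    "record_id": [1, 2, 3],
    "alpha": [10, 30, None],
    "beta": [20, None, 50],
    "gamma": [None, 40, None],
})

# melt() unpivots back to three fixed columns: id, tag, value.
# Dropping the empty cells also removes the sparsity the pivot introduced.
long_df = (wide_df.melt(id_vars="record_id", var_name="tag", value_name="value")
                  .dropna(subset=["value"]))
print(long_df.shape)  # → (5, 3): five non-empty (record, tag) pairs
```

The same unpivot can be done upstream in Power Query ("Unpivot Other Columns") or in T-SQL, so the model only ever sees three columns regardless of how many distinct text values exist.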
Pat