Hi All,
I need your help improving the performance of a query that handles a large amount of data.
I have a dataset with more than 7 million rows. One of its columns contains multiple text values that we want to analyze, so we initially pivoted the data, which generated more than 400 new columns.
The problems started when we tried to build reports on this data.
Data loading and visual refresh times increased tremendously, consuming all my resources.
I also tried a smaller query that selects only the columns needed for the report, but the issue remains the same.
Could you please suggest some ideas for handling this issue and improving the performance?
Note: I am getting this data from SQL Server.
Thanks,
Akash
Pivoting the data is usually not a good idea (especially when it results in 400 more columns!). I would keep it unpivoted and post here on the community for help on the analysis of the unpivoted data.
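To illustrate the idea, here is a minimal Python sketch of what "keeping it unpivoted" means: one narrow row per (entity, attribute, value) instead of one wide row with 400+ mostly empty columns. The column names ("OrderID", "TagA", etc.) are hypothetical placeholders, not from the original data.

```python
def unpivot(rows, id_col, value_cols):
    """Turn one wide row per entity into one narrow row per (entity, attribute),
    skipping empty cells so sparse data stays small."""
    for row in rows:
        for col in value_cols:
            value = row.get(col)
            if value not in (None, ""):
                yield {"id": row[id_col], "attribute": col, "value": value}

# Hypothetical wide data: many tag columns, mostly empty.
wide = [
    {"OrderID": 1, "TagA": "x", "TagB": "",  "TagC": "y"},
    {"OrderID": 2, "TagA": "",  "TagB": "z", "TagC": ""},
]

long_rows = list(unpivot(wide, "OrderID", ["TagA", "TagB", "TagC"]))
# The empty cells disappear: 2 rows x 3 tag columns collapse to 3 long rows.
```

In Power BI you would do the same thing with Unpivot Columns in Power Query (or UNPIVOT on the SQL Server side), so the model stays at three columns no matter how many distinct text values appear.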
Pat