Hi All,
I need your help with improving the performance of a query that handles a very large dataset.
I have a table with more than 7 million rows. It contains a column with many distinct text values that we want to analyse, so we initially pivoted the data, which generated more than 400 new columns.
We are now trying to build reports on this data, and that is where the problems started.
Data loading and visual refresh times have increased tremendously and are consuming all of my resources.
I also tried a smaller query that captures only a limited set of columns for the report, but the issue remains.
Could you share some ideas on how I can handle this and improve the performance?
Note: I am getting this data from SQL Server.
Thanks,
Akash
Pivoting the data is usually not a good idea (especially when it results in 400+ new columns!). I would keep it unpivoted and post back here on the community for help with analysing the unpivoted data.
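For example, rather than pivoting the text column into 400+ new columns, you could let SQL Server aggregate the data in its long (unpivoted) shape and only load the summary into Power BI. This is an illustrative sketch only, using made-up names (dbo.SourceData, TextValue, Amount) that you would replace with your real table and columns:

```sql
-- Hypothetical table for illustration: dbo.SourceData(RowId, EventDate, TextValue, Amount)
-- Keep TextValue as a single column and aggregate on the SQL Server side,
-- so Power BI only loads the summary instead of 7M rows x 400 pivoted columns.
SELECT
    TextValue,                        -- the text column you had been pivoting
    COUNT(*)    AS Occurrences,       -- how many rows carry each value
    SUM(Amount) AS TotalAmount        -- any numeric measure you need in the report
FROM dbo.SourceData
GROUP BY TextValue
ORDER BY Occurrences DESC;
```

The aggregated result has one row per distinct text value (a few hundred rows, judging by the 400+ columns your pivot produced), which loads and refreshes far faster. Even if you do need row-level detail in the model, keeping the single TextValue column is generally much friendlier to Power BI's columnar storage than 400 mostly empty pivoted columns.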
Pat