I have a table named "Employee Snapshot".
It has dimensional attributes like Employee Class, Employee Code, Employee Status, etc.
I am removing all of these dimensional columns and keeping only the dimension keys that link Employee Snapshot to the dimension tables.
Steps done already:
- Kept only the required columns in the Snapshot table
- Filtered down to only the top 10 records
- Disabled multiple background settings
- Tried Table.Join instead of Table.NestedJoin, but it isn't working; maybe I am unsure how to apply it (see the sketch below)
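For what it's worth, here is a minimal sketch of how Table.Join can replace a Table.NestedJoin + expand pair. The query names (#"Employee Snapshot", #"Dim Employee Class") and the key/column names are assumptions for illustration, not the actual model:

```
// Sketch only: query and column names are assumptions.
let
    Snapshot = #"Employee Snapshot",
    // Table.Join fails on duplicate column names, so prefix the
    // dimension table's columns before joining.
    Dim = Table.PrefixColumns(#"Dim Employee Class", "Dim"),
    // Unlike Table.NestedJoin, Table.Join returns a flat table directly,
    // so no separate "expand" step is needed.
    Joined = Table.Join(
        Snapshot, "Employee Class",
        Dim, "Dim.Employee Class",
        JoinKind.LeftOuter
    ),
    // Keep only the surrogate key; drop the textual attribute columns.
    Result = Table.RemoveColumns(
        Table.RenameColumns(Joined, {{"Dim.Employee Class Key", "Employee Class Key"}}),
        {"Employee Class", "Dim.Employee Class"}
    )
in
    Result
```

Repeated 14 times this is still 14 joins, so it may only shave evaluation time rather than eliminate the problem.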
Now, my fact table needs 14 merge operations to get 14 keys in place of 14 dimensional values.
The problem is that the query keeps loading forever whenever I make any change during development, and it is affecting my work massively.
Please help, thanks!
I received help on LinkedIn from Power Query champion @rick De Groot.
Wish I could tag him here.
His response made me think deeply, and I redesigned the whole thing more efficiently.
This is what he replied:
You can filter to just one value in both of the tables you are merging, then go back to the base table and remove the filter step from the applied steps when you are done. It is one way to improve merging performance.
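In Power Query terms, that pattern might look roughly like the sketch below; the table, column, and filter-value names are all hypothetical:

```
// Sketch of the "filter while developing, remove the filter when done" pattern.
let
    Snapshot = #"Employee Snapshot",
    // Temporary step: keep a single employee so previews evaluate quickly.
    // Delete this step (and the matching one in the dimension query)
    // from Applied Steps before the final refresh.
    DevFilter = Table.SelectRows(Snapshot, each [Employee Code] = "E0001"),
    Merged = Table.NestedJoin(
        DevFilter, {"Employee Code"},
        #"Dim Employee", {"Employee Code"},
        "Dim Employee", JoinKind.LeftOuter
    )
in
    Merged
```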
I have tried that; it wasn't of much help.
@GauravGG there is no way to deny that PQ is slow and does not really scale for large datasets. However, make sure you stick to the following settings, which might slightly improve performance (though I doubt it will do much good for 14 joins).
Having said that, if this is a data-level task and you have Premium capacity, you can bring all those tables into a datamart and use the SQL endpoint to write fully qualified SQL queries (basically, the data transformations shift from PQ to SQL). If you have Fabric, the options are even wider (blazing-fast Apache Spark with Scala, Spark SQL, T-SQL, Python, R).
If this is an analysis-level task, you can achieve it by building a data model (the join is committed to memory without needing an explicit join) and writing DAX calculations.
I tried the option you showed to disable parallel loading; it didn't help much.
Also, I am unsure if I understood this part of your response:
"If this is an analysis-level task, you can achieve it by building a data model (the join is committed to memory without needing an explicit join) and writing DAX calculations."
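As I understand that suggestion: once the snapshot and dimension tables are related in the data model, the engine resolves the join in memory at query time, so no merge is needed in Power Query at all. A hypothetical DAX sketch, assuming a relationship on a made-up Employee Key column:

```
// Hypothetical measure; relies on a model relationship between
// 'Employee Snapshot'[Employee Key] and 'Dim Employee'[Employee Key].
Active Employee Count =
CALCULATE (
    COUNTROWS ( 'Employee Snapshot' ),
    // The filter on the dimension flows to the fact table through
    // the relationship, with no explicit join written anywhere.
    'Dim Employee'[Employee Status] = "Active"
)
```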