Hi all,
I am trying to merge two different sets of data into one. Each set represents 12 months of data (actuals and estimates) for a given month. I take the latest two sets of files, create a surrogate key based on several columns, then do a full outer join on that key to combine both sets into one long row. Once I have this long row, I check whether the newer file has data; if it doesn't, I use the prior month's values.
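For illustration, the workflow described above can be sketched in plain Python: build a key from several columns, full-outer-join the two monthly datasets on it, and fall back to the prior month's value when the newer file has none. The column names ("region", "product", "amount") are invented for this sketch, not from the original post.

```python
def make_key(row, key_cols):
    """Concatenate the key columns into a single surrogate key."""
    return "|".join(str(row[c]) for c in key_cols)

def full_outer_coalesce(newer, prior, key_cols, value_col):
    """Full outer join on the key; prefer the newer value, else the prior one."""
    newer_by_key = {make_key(r, key_cols): r for r in newer}
    prior_by_key = {make_key(r, key_cols): r for r in prior}
    merged = {}
    # Union of keys gives the full outer join; coalesce picks the value.
    for key in newer_by_key.keys() | prior_by_key.keys():
        new_row = newer_by_key.get(key)
        old_row = prior_by_key.get(key)
        if new_row is not None and new_row.get(value_col) is not None:
            merged[key] = new_row[value_col]
        elif old_row is not None:
            merged[key] = old_row.get(value_col)
        else:
            merged[key] = None
    return merged

newer = [{"region": "East", "product": "A", "amount": None},
         {"region": "West", "product": "B", "amount": 20}]
prior = [{"region": "East", "product": "A", "amount": 15},
         {"region": "South", "product": "C", "amount": 5}]

result = full_outer_coalesce(newer, prior, ["region", "product"], "amount")
# "East|A" falls back to the prior month's 15; "South|C" exists only in prior.
```

In Power Query the same shape would be a `Table.NestedJoin` with `JoinKind.FullOuter` followed by a conditional column, but the logic is identical.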
The outer join seems to be really slow to load in preview, and I'm wondering whether there is a more efficient way to do this. Perhaps a hash key instead of a surrogate key? I'm not sure how I would do that.
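As a minimal sketch of the hash-key idea, here is one way to turn the same concatenated columns into a short, fixed-length key using Python's standard library. The column names are illustrative assumptions; whether this actually speeds up the join depends on the engine doing the matching.

```python
import hashlib

def hash_key(row, key_cols):
    """MD5 of the joined key columns -> compact, fixed-length join key."""
    raw = "|".join(str(row[c]) for c in key_cols)
    return hashlib.md5(raw.encode("utf-8")).hexdigest()

row = {"region": "East", "product": "A", "month": "2025-09"}
key = hash_key(row, ["region", "product", "month"])
# key is a deterministic 32-character hex string.
```

Note that a hash key still has to be computed for every row in both tables, so it mainly helps when the concatenated key columns are very long.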
Thanks!
Hi @More_BI,
I would prefer DAX to merging queries in your scenario. You already have two big tables; merging them would create a third big table. Instead of merging, you can create a relationship between the two tables and then use a DAX formula like this:
RelatedRows = COUNTROWS ( RELATEDTABLE ( Table2 ) )
If [RelatedRows] = 0, there are no matching rows in the other table. If you can share some sample data, we can suggest a more accurate formula.
Besides, can you try merging them column by column, rather than on a single combined key column? Something like this:
To be honest, I didn't see a difference because my dataset isn't big enough. Could you please try it and share the result?
Best Regards!
Dale