EMKody
New Member

Data not passing through dataflow

Hi all,

 

Thanks in advance for any help. I am new to Power Query and have been trying to learn it as I go! Currently I am using a dataflow to grab two tables from Salesforce. After they are in Power Query and I have cleaned out the unwanted columns, I use a merge query to put everything into a single table. In Power Query I can see that the table has multiple rows for certain entries that have multiple related records from the merged table, and this is the desired outcome. The problem I am facing is that when I pull that table up in a report, there is only a single row for each entity, and any extra rows are dropped from the table. I am happy to provide any additional information that would help in troubleshooting!
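A simplified sketch of the kind of merge step I am describing (the Salesforce objects and column names below are placeholders, not my real ones):

    let
        // Placeholder Salesforce objects and columns -- not the real names
        Source       = Salesforce.Data(),
        Accounts     = Source{[Name = "Account"]}[Data],
        Contacts     = Source{[Name = "Contact"]}[Data],
        AccountsSlim = Table.SelectColumns(Accounts, {"Id", "Name"}),
        ContactsSlim = Table.SelectColumns(Contacts, {"AccountId", "LastName"}),
        // Merge the related records onto each entity, then expand the nested table
        Merged       = Table.NestedJoin(AccountsSlim, {"Id"}, ContactsSlim, {"AccountId"}, "Related", JoinKind.LeftOuter),
        Expanded     = Table.ExpandTableColumn(Merged, "Related", {"LastName"}, {"Related LastName"})
    in
        Expanded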

 

[Screenshot: Example of entities with multiple records. If I pull this up in the table, only a single entity exists.]

 

 

 

Thanks,

Kody

1 ACCEPTED SOLUTION
BA_Pete
Super User

Hi Kody,

 

Table visuals in Power BI automatically aggregate similar values to the fewest possible rows within the data context.

In your example, if you put the column that contains "HBA of Winton-Salem" (let's call it [Column1]) into a table visual, you would just get a single row with that value in it. However, if you then added one of the columns from the merged table that has a different value for each instance of "HBA of Winton-Salem" ([Column2]), the table visual will duplicate "HBA of Winton-Salem" in order to display all the different [Column1] : [Column2] combinations.

This behaviour is by design and makes perfect sense in all but the most edge-case reporting scenarios.
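As a rough analogy in Power Query terms (the column names are placeholders), what the table visual shows is equivalent to taking the distinct combinations of whichever fields you drop into it:

    let
        // Tiny stand-in for the merged dataflow table
        Merged    = #table(
            {"Column1", "Column2"},
            {
                {"HBA of Winton-Salem", "Record A"},
                {"HBA of Winton-Salem", "Record B"},
                {"Another Entity",      "Record C"}
            }
        ),
        // One field in the visual: one row per distinct value, so a single "HBA of Winton-Salem" row
        OneField  = Table.Distinct(Table.SelectColumns(Merged, {"Column1"})),
        // Two fields: one row per distinct [Column1] : [Column2] pair, so the entity now repeats
        TwoFields = Table.Distinct(Table.SelectColumns(Merged, {"Column1", "Column2"}))
    in
        TwoFields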

 

As an aside, merging tables to create larger tables with duplicated rows/values in this way is inefficient in terms of processing resources, file size, and model responsiveness. The most efficient way to handle this scenario would be to send both tables to the data model unmerged, then create a relationship between the two tables on the field that you previously merged them on (it looks like this relationship would default to Table1 ONE : MANY Table2).

This will give you the same functionality and usability of the data, but it will speed up refresh times by removing the processing overhead of the merge, reduce your file size by removing the values duplicated by the merge, and improve model response times because Power BI can scan smaller tables.
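For illustration only, using assumed object and field names, each dataflow query would then just trim its own columns and stop short of the merge; the join is handled by a model relationship instead:

    // Query 1 - the ONE side; placeholder names
    let
        Source   = Salesforce.Data(),
        Accounts = Source{[Name = "Account"]}[Data],
        Trimmed  = Table.SelectColumns(Accounts, {"Id", "Name"})
    in
        Trimmed

    // Query 2 - the MANY side; note there is no Table.NestedJoin step.
    // In the model view, relate Contacts[AccountId] (many) to Accounts[Id] (one).
    let
        Source   = Salesforce.Data(),
        Contacts = Source{[Name = "Contact"]}[Data],
        Trimmed  = Table.SelectColumns(Contacts, {"AccountId", "LastName"})
    in
        Trimmed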

 

Pete



Now accepting Kudos! If my post helped you, why not give it a thumbs-up?

Proud to be a Datanaut!





Hi Pete,

 

I really appreciate the feedback. I had originally thought it would be easiest to do the merge since we are bringing very little data in from the second table, but you explained it really well. This is how I had originally tried doing it, but I need to gain more knowledge of working with relationships. Merging the table was my "easy workaround", but I see now I was using duct tape to plug the holes in my boat! Thanks again for taking the time to write out a detailed response!
