Hello all,
So I have a table called tMaster with a few pieces of information.
ZIP   | Origin | Sub Destination
12345 | 175    |
12345 | 960    | 1963
45678 | 511    |
67890 | 175    |
67890 | 511    |
67890 | 960    | 1963
So what this means is that ZIP 12345 comes from 175 and 1963 (not 960), 45678 comes from 511, and 67890 comes from 175, 511, and 1963. Basically, if a row has a Sub Destination, it overrides the Origin.
I want to make another column in tMaster that looks like this:
ZIP   | Origin | Sub Destination | Origin(s)
12345 | 960    | 1963            | 175 1963
12345 | 175    |                 | 175 1963
45678 | 511    |                 | 511
67890 | 511    |                 | 175 511 1963
67890 | 175    |                 | 175 511 1963
67890 | 960    | 1963            | 175 511 1963
If there's a way to make sure the new column has them in numerical order, that would help a lot, but it's not a big deal if not.
Normally I'd just make something like Origin(s) = [Origin] & " " & [Sub Destination], but because the origins could be in either column, I'm not sure what to do.
Hi @Anonymous,
You can also try creating the new columns with DAX.
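// Effective origin per row: Sub Destination overrides Origin when present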
col_1 = IF(ISBLANK('Table'[Sub Destination]),'Table'[Origin],'Table'[Sub Destination])
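// All effective origins for rows with the same ZIP, combined with spaces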
col_2 = CONCATENATEX(FILTER('Table',EARLIER('Table'[ZIP])='Table'[ZIP]),'Table'[col_1]," ")
Best Regards,
Liang
If this post helps, then please consider accepting it as the solution to help other members find it more quickly.
It can be done in Power Query as below:
Table:
let
Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WMjQyNjFV0lEyNAeRCkqxOggxSzMDkIylmTFY2MTUzNwCKGBqaAhXChSxNEDTDhPDpg7ZyFgA", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [ZIP = _t, Origin = _t, #"Sub Destination" = _t]),
#"Changed Type" = Table.TransformColumnTypes(Source,{{"ZIP", type text}, {"Origin", type text}, {"Sub Destination", type text}}),
#"Added Custom" = Table.AddColumn(#"Changed Type", "Origin 2", each if ([Sub Destination] = "") then [Origin] else [Sub Destination], type nullable text),
#"Merged Queries" = Table.NestedJoin(#"Added Custom", {"ZIP"}, #"Table (2)", {"ZIP"}, "Table (2)", JoinKind.LeftOuter),
#"Expanded Table (2)" = Table.ExpandTableColumn(#"Merged Queries", "Table (2)", {"Origin List"}, {"Origin List"}),
#"Removed Columns" = Table.RemoveColumns(#"Expanded Table (2)",{"Origin 2"})
in
#"Removed Columns"
Table (2):
let
Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WMjQyNjFV0lEyNAeRCkqxOggxSzMDkIylmTFY2MTUzNwCKGBqaAhXChSxNEDTDhPDpg7ZyFgA", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [ZIP = _t, Origin = _t, #"Sub Destination" = _t]),
#"Changed Type" = Table.TransformColumnTypes(Source,{{"ZIP", type text}, {"Origin", type text}, {"Sub Destination", type text}}),
#"Trimmed Text" = Table.TransformColumns(#"Changed Type",{{"Origin", Text.Trim, type text}, {"Sub Destination", Text.Trim, type text}}),
#"Added Custom" = Table.AddColumn(#"Trimmed Text", "Origin 2", each if ([Sub Destination] = "") then [Origin] else [Sub Destination], type nullable text),
#"Grouped Rows1" = Table.Group(#"Added Custom", {"ZIP"}, {{"Origin List", each Text.Combine([Origin 2], " "), type nullable text}})
in
#"Grouped Rows1"
Table (2) is not part of the data load; it's just a kind of temp table for this need. You can disable (uncheck) "Enable load" for it.
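Side note on the numerical ordering mentioned above: if you also want the values inside "Origin List" sorted numerically, the grouping step in Table (2) could be adjusted roughly like this (just a sketch, assuming every "Origin 2" value is numeric text):
#"Grouped Rows1" = Table.Group(#"Added Custom", {"ZIP"},
    {{"Origin List",
        // convert to numbers, sort ascending, then back to text before combining
        // (assumes every value in [Origin 2] is numeric text)
        each Text.Combine(List.Transform(List.Sort(List.Transform([Origin 2], Number.From)), Text.From), " "),
        type nullable text}})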
After my meeting I will go ahead and try to decipher this and get back to you, but thank you so far!
I have less experience with the Query Editor, so give me some time to make sense of this!
First question: do I make Table (2) first, before editing Table?
Sure, you can do that.
To make it an understandable solution for the long term:
a) Table (orig) - the original data table. You can disable "Enable load".
b) Table (2) - like I said, a temp table. You can disable "Enable load".
c) Table - the final reporting table.
Just sharing my 2 cents; it all depends on your needs. Disabling or enabling load is purely down to your requirements. In my view, temp tables / intermediate steps should have load disabled.
=======================
You can do it all in the same table too; don't take this to mean it isn't possible.
One-table method, FYI:
let
Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WMjQyNjFV0lEyNAeRCkqxOggxSzMDkIylmTFY2MTUzNwCKGBqaAhXChSxNEDTDhPDpg7ZyFgA", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [ZIP = _t, Origin = _t, #"Sub Destination" = _t]),
#"Changed Type" = Table.TransformColumnTypes(Source,{{"ZIP", type text}, {"Origin", type text}, {"Sub Destination", type text}}),
#"Trimmed Text" = Table.TransformColumns(#"Changed Type",{{"Origin", Text.Trim, type text}, {"Sub Destination", Text.Trim, type text}}),
#"Added Custom" = Table.AddColumn(#"Trimmed Text", "Origin 2", each if ([Sub Destination] = "") then [Origin] else [Sub Destination], type nullable text),
#"Grouped Rows1" = Table.Group(#"Added Custom", {"ZIP"}, {{"Origin List", each Text.Combine([Origin 2], " "), type nullable text}}),
#"Merged Queries" = Table.NestedJoin(#"Grouped Rows1", {"ZIP"}, #"Trimmed Text", {"ZIP"}, "Grouped Rows1", JoinKind.LeftOuter),
#"Expanded Grouped Rows1" = Table.ExpandTableColumn(#"Merged Queries", "Grouped Rows1", {"Origin", "Sub Destination"}, {"Origin", "Sub Destination"}),
#"Reordered Columns" = Table.ReorderColumns(#"Expanded Grouped Rows1",{"ZIP", "Origin", "Sub Destination", "Origin List"})
in
#"Reordered Columns"
In the long term, we may miss or alter steps like the self-join (a join back to a previous step) and break the query easily. Hence the recommended two/three-table method.
It depends on your needs.