Hi.
I have a question about how to handle some data in Power Query.
In Situation 1 the data is disaggregated: for Type A, for example, I have 3 rows with contiguous intervals. I need these 3 rows to be combined into a single row, as shown in Situation 2.
**Situation 1**

| ID | Type | Start | End | Interval |
|----|------|-------|-----|----------|
| 1  | A    | 0     | 10  | 10       |
| 2  | A    | 10    | 20  | 10       |
| 3  | A    | 20    | 30  | 10       |
| 4  | B    | 40    | 50  | 10       |
| 5  | B    | 50    | 60  | 10       |

**Situation 2**

| ID | Type | Start | End | Interval |
|----|------|-------|-----|----------|
| 1  | A    | 0     | 30  | 30       |
| 2  | B    | 40    | 60  | 20       |
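The transformation being asked for can be sketched outside Power Query. The following Python sketch (an illustration only, not part of the thread's M or DAX code) collapses consecutive rows of the same Type whose intervals touch, i.e. where the previous End equals the next Start:

```python
# Sample rows as (ID, Type, Start, End), matching Situation 1 above.
rows = [
    (1, "A", 0, 10), (2, "A", 10, 20), (3, "A", 20, 30),
    (4, "B", 40, 50), (5, "B", 50, 60),
]

def merge_contiguous(rows):
    merged = []
    for _id, typ, start, end in rows:
        if merged and merged[-1][0] == typ and merged[-1][2] == start:
            # Same Type and the intervals touch: extend the open run.
            last = merged[-1]
            merged[-1] = (typ, last[1], end)
        else:
            # Different Type or a gap: start a new run.
            merged.append((typ, start, end))
    # Re-number the runs and compute Interval = End - Start.
    return [(i + 1, t, s, e, e - s) for i, (t, s, e) in enumerate(merged)]
```

Running `merge_contiguous(rows)` reproduces Situation 2: one row per contiguous run, with a fresh ID and the summed Interval.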
Hi @ThiagoRosa ,
Try the code below:
let
    Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WMlTSUXIEYgMgNoQSsTrRSkZQcSOQmDGShDFUAixmgiRhAmQ6wcRMkSRMoRJgMTOYRCwA", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [ID = _t, Type = _t, Start = _t, End = _t, Interval = _t]),
    #"Changed Type" = Table.TransformColumnTypes(Source, {{"ID", Int64.Type}, {"Type", type text}, {"Start", Int64.Type}, {"End", Int64.Type}, {"Interval", Int64.Type}}),
    Custom1 = Table.TransformRows(
        #"Changed Type",
        each
            let
                mid = Table.Max(Table.SelectRows(#"Changed Type", (x) => x[Type] = [Type]), "ID")[ID],
                res =
                    if [ID] <> mid then
                        let nextrow = Table.SelectRows(#"Changed Type", (y) => y[ID] = [ID] + 1) in
                            if [Start] + [Interval] = List.Sum(nextrow[Start]) then
                                let tab = Table.SelectRows(#"Changed Type", (z) => z[ID] = [ID] or z[ID] = [ID] + 1) in
                                    [ID = [ID], Type = [Type], Start = List.Min(tab[Start]), End = List.Max(tab[End]), Interval = List.Sum(tab[Interval])]
                            else _
                    else
                        [ID = null, Type = null, Start = null, End = null, Interval = null]
            in
                res
    ),
    #"Converted to Table" = Table.FromList(Custom1, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
    #"Expanded Column1" = Table.ExpandRecordColumn(#"Converted to Table", "Column1", {"ID", "Type", "Start", "End", "Interval"}, {"ID", "Type", "Start", "End", "Interval"}),
    #"Filtered Rows" = Table.SelectRows(#"Expanded Column1", each ([ID] <> null)),
    #"Removed Columns" = Table.RemoveColumns(#"Filtered Rows", {"ID"}),
    #"Added Index" = Table.AddIndexColumn(#"Removed Columns", "ID", 1, 1, Int64.Type),
    #"Reordered Columns" = Table.ReorderColumns(#"Added Index", {"ID", "Type", "Start", "End", "Interval"}),
    #"Changed Type1" = Table.TransformColumnTypes(#"Reordered Columns", {{"ID", Int64.Type}, {"Type", type text}, {"Start", Int64.Type}, {"End", Int64.Type}})
in
    #"Changed Type1"
Running this query gives the merged result from Situation 2.
You could also do it in DAX; create 4 calculated columns as below:
Column =
VAR _nextstart = CALCULATE(MAX('Table (2)'[Start]), FILTER('Table (2)', 'Table (2)'[ID] = EARLIER('Table (2)'[ID]) + 1 && 'Table (2)'[Type] = EARLIER('Table (2)'[Type])))
VAR _nextend = CALCULATE(MAX('Table (2)'[End]), FILTER('Table (2)', 'Table (2)'[ID] = EARLIER('Table (2)'[ID]) + 1 && 'Table (2)'[Type] = EARLIER('Table (2)'[Type])))
VAR _previousstart = CALCULATE(MAX('Table (2)'[Start]), FILTER('Table (2)', 'Table (2)'[ID] = EARLIER('Table (2)'[ID]) - 1 && 'Table (2)'[Type] = EARLIER('Table (2)'[Type])))
VAR _previousend = CALCULATE(MAX('Table (2)'[End]), FILTER('Table (2)', 'Table (2)'[ID] = EARLIER('Table (2)'[ID]) - 1 && 'Table (2)'[Type] = EARLIER('Table (2)'[Type])))
RETURN
IF(
    _nextstart <> BLANK() || _nextend <> BLANK(),
    IF('Table (2)'[Start] = _nextstart - 'Table (2)'[Interval] && 'Table (2)'[End] = _nextend - 'Table (2)'[Interval], 1, 0),
    IF(
        _nextstart = BLANK() || _nextend = BLANK(),
        IF('Table (2)'[Start] = _previousstart + 'Table (2)'[Interval] && 'Table (2)'[End] = _previousend + 'Table (2)'[Interval], 1, 0)
    )
)

_Start =
CALCULATE(MIN('Table (2)'[Start]), FILTER('Table (2)', 'Table (2)'[Column] = EARLIER('Table (2)'[Column]) && 'Table (2)'[Type] = EARLIER('Table (2)'[Type])))

_End =
CALCULATE(MAX('Table (2)'[End]), FILTER('Table (2)', 'Table (2)'[Column] = EARLIER('Table (2)'[Column]) && 'Table (2)'[Type] = EARLIER('Table (2)'[Type])))

_Interval =
CALCULATE(SUM('Table (2)'[Interval]), FILTER('Table (2)', 'Table (2)'[Column] = EARLIER('Table (2)'[Column]) && 'Table (2)'[Type] = EARLIER('Table (2)'[Type])))
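The intent of these columns can be sketched in Python (an illustration of the flag-and-aggregate idea, not the DAX itself): tag each row with a run identifier that changes whenever the next row is not a continuation, then take the MIN/MAX/SUM within each run:

```python
# Rows as (Type, Start, End, Interval), matching the first example.
rows = [
    ("A", 0, 10, 10), ("A", 10, 20, 10), ("A", 20, 30, 10),
    ("B", 40, 50, 10), ("B", 50, 60, 10),
]

# Step 1: assign a run id; a new run starts on a Type change or a gap.
run_ids = []
run = 0
for i, (typ, start, end, interval) in enumerate(rows):
    if i > 0 and (rows[i - 1][0] != typ or rows[i - 1][2] != start):
        run += 1
    run_ids.append(run)

# Step 2: aggregate Start/End/Interval within each run,
# mirroring the _Start, _End and _Interval columns above.
result = []
for r in sorted(set(run_ids)):
    block = [row for row, rid in zip(rows, run_ids) if rid == r]
    result.append((
        block[0][0],                  # Type
        min(b[1] for b in block),     # MIN(Start)
        max(b[2] for b in block),     # MAX(End)
        sum(b[3] for b in block),     # SUM(Interval)
    ))
```

Grouping on a computed run id rather than on Type alone is what keeps discontinuous stretches of the same Type apart.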
Running these DAX columns gives the merged result.
For the related .pbix file, please see the attachment.
Best Regards,
Kelly
Did I answer your question? Mark my post as a solution!
Thank you @Vera_33 .
The proposed solution is a good one. However, I found another situation in my database that prevents it from being applied: there are rows of the same type whose intervals are not contiguous.
Below I present the new situation.
**Situation 1**

| ID | Type | Start | End | Interval |
|----|------|-------|-----|----------|
| 1  | A    | 0     | 10  | 10       |
| 2  | A    | 20    | 30  | 10       |
| 3  | A    | 30    | 40  | 10       |
| 4  | B    | 40    | 50  | 10       |
| 5  | B    | 50    | 60  | 10       |

**Situation 2**

| ID | Type | Start | End | Interval |
|----|------|-------|-----|----------|
| 1  | A    | 0     | 10  | 10       |
| 2  | A    | 20    | 40  | 20       |
| 3  | B    | 40    | 60  | 20       |
Can you help me?
Try this. You only need to add an Interval column ([End] - [Start]) afterwards:
let
Origine = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("Tc67DcAwCEXRXahdGH+VMlkDsf8aBvIsUWChewojQkyFXptqw3i0CDX05q0n6IBoI8Gw9bttJpiAaCvBAmxvT/1/c9iAaHxPUD0=", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [ID = _t, Type = _t, Start = _t, End = _t, Interval = _t]),
#"Modificato tipo" = Table.TransformColumnTypes(Origine,{{"Start", Int64.Type}, {"End", Int64.Type}, {"Interval", Int64.Type}}),
#"Raggruppate righe" = Table.Group(#"Modificato tipo", {"Type", "Start", "End"}, {{"all", each [Start = List.First([Start]), End = List.Last([End])]}}, GroupKind.Local, (x, y) => Number.From(x[Type] <> y[Type] or y[Start] - x[End] <> 0)),
#"Rimosse colonne" = Table.RemoveColumns(#"Raggruppate righe",{"Start", "End"}),
#"Tabella all espansa" = Table.ExpandRecordColumn(#"Rimosse colonne", "all", {"Start", "End"})
in
#"Tabella all espansa"
This should be complete, if I understand what you are looking for:
let
Origine = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("Tc67DcAwCEXRXahdGH+VMlkDsf8aBvIsUWChewojQkyFXptqw3i0CDX05q0n6IBoI8Gw9bttJpiAaCvBAmxvT/1/c9iAaHxPUD0=", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [ID = _t, Type = _t, Start = _t, End = _t, Interval = _t]),
#"Modificato tipo" = Table.TransformColumnTypes(Origine,{{"Start", Int64.Type}, {"End", Int64.Type}, {"Interval", Int64.Type}}),
#"Raggruppate righe" = Table.Group(#"Modificato tipo", {"Type", "Start", "End"}, {{"all", each [Start = List.First([Start]), End = List.Last([End]), Interval = List.Sum([Interval])]}}, GroupKind.Local, (x, y) => Number.From(x[Type] <> y[Type] or y[Start] - x[End] <> 0)),
#"Rimosse colonne" = Table.RemoveColumns(#"Raggruppate righe",{"Start", "End"}),
#"Tabella all espansa" = Table.ExpandRecordColumn(#"Rimosse colonne", "all", {"Start", "End","Interval"})
in
#"Tabella all espansa"
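For readers unfamiliar with GroupKind.Local, here is a rough Python analogue (a sketch of the semantics as I understand them, not Power Query itself): rows are grouped only while they stay consecutive, and a comparer decides where each group breaks.

```python
def local_group(rows, new_group, aggregate):
    """Group consecutive rows; start a new group whenever
    new_group(previous_row, current_row) is True."""
    groups, current = [], []
    for row in rows:
        if current and new_group(current[-1], row):
            groups.append(aggregate(current))
            current = []
        current.append(row)
    if current:
        groups.append(aggregate(current))
    return groups

# The discontinuous example, as (Type, Start, End, Interval).
rows = [
    ("A", 0, 10, 10), ("A", 20, 30, 10), ("A", 30, 40, 10),
    ("B", 40, 50, 10), ("B", 50, 60, 10),
]

out = local_group(
    rows,
    # Mirrors the M comparer: break on a Type change or a gap
    # (next Start different from previous End).
    new_group=lambda x, y: x[0] != y[0] or y[1] - x[2] != 0,
    # Mirrors the M aggregation: first Start, last End, summed Interval.
    aggregate=lambda g: (g[0][0], g[0][1], g[-1][2], sum(r[3] for r in g)),
)
```

Because the comparer sees each adjacent pair, a gap inside the same Type correctly starts a new group, which is exactly what the per-Type grouping approaches miss.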
@Anonymous thanks.
I will try this solution. I will give feedback later.
Dear @Anonymous thanks.
This solution is perfect.
@ThiagoRosa - I don't understand your expected results.
Please explain the logic of how table two comes about.
How to get good help fast. Help us help you.
How To Ask A Technical Question If you Really Want An Answer
How to Get Your Question Answered Quickly - Give us a good and concise explanation
How to provide sample data in the Power BI Forum - Provide data in a table format per the link, or share an Excel/CSV file via OneDrive, Dropbox, etc.. Provide expected output using a screenshot of Excel or other image. Do not provide a screenshot of the source data. I cannot paste an image into Power BI tables.
DAX is for Analysis. Power Query is for Data Modeling
Proud to be a Super User!
MCSA: BI Reporting

@edhans I will try to be more specific.
My original database contains road sections located in Brazil. Highways are named with the prefix "BR" followed by a number, such as "101". In addition, each section has a beginning and an end. In my example, I use the highway code as "Type", the starting kilometer as "Start" and the ending kilometer as "End". The "Interval" corresponds to the difference between "End" and "Start".
The original database is not well organized and I need to organize it so that it can generate maps.
In some situations, like the one described in my original message, two contiguous stretches appear as two rows in my table, and I need to combine them into a single record.
In the example, in Situation 1, ID 3 was joined with ID 2 because they form a continuous stretch, that is, the "Start" of ID 3 equals the "End" of ID 2. Thus IDs 2 and 3 were merged into a new ID 2 in Situation 2.
Something similar happened with IDs 4 and 5: since they are contiguous stretches, they were condensed into a new ID 3 in Situation 2.
Sincerely
I've played with it and I'm still not sure I understand.
If 2 ends at 30, and 3 starts at 30, I get the connection, but since 3 ends at 40 and 4 starts at 40, why isn't that a continuation? Why did you start over with that part?
Hi @ThiagoRosa
You can group by Type; paste this into the Advanced Editor:
let
Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WMlTSUXIEYgMgNoQSsTrRSkZQcbCYEZKEMVQCLGaMJGECZDoBsQlIzBRJwhQqARYzg0nEAgA=", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [ID = _t, Type = _t, Start = _t, End = _t, Interval = _t]),
#"Changed Type" = Table.TransformColumnTypes(Source,{{"ID", Int64.Type}, {"Type", type text}, {"Start", Int64.Type}, {"End", Int64.Type}, {"Interval", Int64.Type}}),
#"Grouped Rows" = Table.Group(#"Changed Type", {"Type"}, {{"Start", each List.Min([Start]), type nullable number}, {"End", each List.Max([End]), type nullable number}, {"Interval", each List.Sum([Interval]), type nullable number}})
in
#"Grouped Rows"
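As a sketch of what this Table.Group does (a plain-Python illustration, not the M code itself): one output row per Type, with min(Start), max(End) and sum(Interval). Note that this merges everything per Type regardless of gaps, so it only fits the fully contiguous case from the original question:

```python
from itertools import groupby

# Rows as (Type, Start, End, Interval), matching the first example
# (fully contiguous within each Type).
rows = [
    ("A", 0, 10, 10), ("A", 10, 20, 10), ("A", 20, 30, 10),
    ("B", 40, 50, 10), ("B", 50, 60, 10),
]

# One row per Type: min(Start), max(End), sum(Interval).
# itertools.groupby groups consecutive keys, so rows must be
# ordered by Type (as they are here).
grouped = [
    (typ, min(r[1] for r in g), max(r[2] for r in g), sum(r[3] for r in g))
    for typ, g in ((t, list(g)) for t, g in groupby(rows, key=lambda r: r[0]))
]
```

On the discontinuous data, this would incorrectly fuse the two separate "A" stretches into one row, which is why the GroupKind.Local answer above is needed there.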