Hey guys,
I have a sheet with "process movements" (I have no idea what it's called in English, so that's a free translation :-)).
So, in each row of my sheet I have:
num of the protocol | department (that's sending the document) | date/time | department (that's receiving the document) | date/time
Therefore, in order to find the time each department spends on its work, I have to take the difference between the red value (see image) and the blue value (see image), but only if the yellow values (see image) are equal.
I found a solution with:
if [num_protocolo] = indexcreated[num_protocolo]{[Index]+1} then indexcreated[data_envio_origem]{[Index]+1} - [data_recebimento_destino] else 0
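For anyone trying to follow that expression, here is a minimal Python sketch of what the column computes (the row tuples and integer timestamps are invented for illustration): if the next row has the same protocol number, take its send time minus this row's receive time, otherwise 0.

```python
def column_values(rows):
    """rows: list of (num_protocolo, data_envio_origem, data_recebimento_destino)
    tuples in sheet order; timestamps are plain integers for brevity.
    Mirrors the indexed-column formula: if the next row shares the same
    protocol number, return next send time minus current receive time, else 0."""
    out = []
    for i, (proto, _send, receive) in enumerate(rows):
        if i + 1 < len(rows) and rows[i + 1][0] == proto:
            out.append(rows[i + 1][1] - receive)  # next row's send - this row's receive
        else:
            out.append(0)
    return out

# Protocol 1 is handed off once (70 - 10 = 60); then the protocol changes.
print(column_values([(1, 0, 10), (1, 70, 80), (2, 100, 110)]))  # [60, 0, 0]
```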
But now every time I apply my changes, it loads 20 GB of data. My file has 16k rows and is only 1.7 MB, so I don't know what's happening.
If you could suggest a measure instead of a new column, I would appreciate it!!!
Image: https://prnt.sc/kve0km
Hi @joaocarlos95,
Based on my test, you can refer to the steps below:
Add a group index in the Query Editor:
let
Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("hY9bCgMxCEW3EvI9EK+PTOJWhux/G00KTRnGUvDr6PHqdeXTulI+MhNaIRTSRN0FThJR0TyOwGoJ1Y2c66ZSwAnkWn3OhtZ7q87STXU2Fp27qP3MMrfukEfWsuq2cM/C8y+c6262dXdkoX37N5qo/dEQafyhY7wA", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type text) meta [Serialized.Text = true]) in type table [Id = _t, #"Time(send)" = _t, #"Time(recive)" = _t]),
#"Changed Type" = Table.TransformColumnTypes(Source,{{"Id", Int64.Type}, {"Time(send)", type datetime}, {"Time(recive)", type datetime}}),
#"Grouped Rows" = Table.Group(#"Changed Type", {"Id"}, {{"all data", each Table.AddIndexColumn(_, "Index",1,1), type table}}),
#"Expanded all data" = Table.ExpandTableColumn(#"Grouped Rows", "all data", {"Time(send)", "Time(recive)", "Index"}, {"all data.Time(send)", "all data.Time(recive)", "all data.Index"}),
#"Renamed Columns" = Table.RenameColumns(#"Expanded all data",{{"all data.Time(send)", "Time(send)"}, {"all data.Time(recive)", "Time(recive)"}, {"all data.Index", "Index"}})
in
#"Renamed Columns"
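In case the grouped-index step is unclear, this is a small Python sketch of what it does, under an assumed row shape (not the actual M evaluation): a 1-based Index is added that restarts for every Id.

```python
def add_group_index(rows):
    """rows: list of (Id, send, receive) tuples already sorted by Id.
    Appends a 1-based Index that restarts for each Id, like the
    Table.Group + Table.AddIndexColumn steps above."""
    counts = {}
    out = []
    for rid, send, receive in rows:
        counts[rid] = counts.get(rid, 0) + 1
        out.append((rid, send, receive, counts[rid]))
    return out

print(add_group_index([(1, "s1", "r1"), (1, "s2", "r2"), (2, "s3", "r3")]))
# [(1, 's1', 'r1', 1), (1, 's2', 'r2', 2), (2, 's3', 'r3', 1)]
```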
Result:
Apply it and create two measures:
previous receive time per ID = CALCULATE(SELECTEDVALUE(Table1[Time(recive)]), FILTER(ALLEXCEPT(Table1, Table1[Id]), Table1[Index] = MAX(Table1[Index]) - 1))
time diff = DATEDIFF(SELECTEDVALUE(Table1[Time(send)]), [previous receive time per ID], MINUTE)
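If it helps to see the two measures outside of DAX, here is a rough Python equivalent (sample timestamps are invented, and the real measures evaluate per filter context while this just walks sorted rows): each row looks up the receive time of the Index - 1 row with the same Id and returns the elapsed minutes between that receive time and its own send time.

```python
from datetime import datetime

def minutes_since_previous_receive(rows):
    """rows: (Id, time_send, time_receive) tuples sorted by Id and by the
    in-group Index. For each row, find the receive time of the previous
    row with the same Id and return the whole minutes between it and this
    row's send time; None when there is no previous row."""
    prev_receive = {}
    out = []
    for rid, send, receive in rows:
        if rid in prev_receive:
            out.append(int((send - prev_receive[rid]).total_seconds() // 60))
        else:
            out.append(None)  # first movement of this Id: nothing to subtract
        prev_receive[rid] = receive
    return out

rows = [
    (1, datetime(2018, 9, 1, 8, 0),  datetime(2018, 9, 1, 9, 0)),
    (1, datetime(2018, 9, 1, 10, 0), datetime(2018, 9, 1, 11, 30)),
    (2, datetime(2018, 9, 2, 8, 0),  datetime(2018, 9, 2, 8, 45)),
    (2, datetime(2018, 9, 2, 9, 0),  datetime(2018, 9, 2, 10, 0)),
]
print(minutes_since_previous_receive(rows))  # [None, 60, None, 15]
```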
Result:
You can also download the pbix file to take a look.
Regards,
Daniel He
Perhaps try it in DAX using EARLIER
See my article on Mean Time Before Failure (MTBF) which uses EARLIER: http://community.powerbi.com/t5/Community-Blog/Mean-Time-Between-Failure-MTBF-and-Power-BI/ba-p/3395...
Thank you for answering, Greg; I don't know if you're a bot :-).
I was trying to use EARLIER, but I'm having a hard time working with row context. For example, I tried this:
Measure = SUMX(FILTER(dProcessos;dProcessos[ID]=EARLIER(dProcessos[ID]));dProcessos[data_envio_origem]-EARLIER(dProcessos[data_recebimento_destino]))
But I get an error on the second EARLIER saying that EARLIER references a row context that doesn't exist. I'm going to read your article right now to see if I can finally understand EARLIER.
BTW, since you're already here :-), do you know why my file grows to 20 GB when I use that M script? The script worked so beautifully that I wanted to use it, but it takes 4 hours to apply my changes.
Hi @joaocarlos95,
Based on my test, you can refer to the steps below:
Sample data:
Create a measure:
Measure =
VAR M = CALCULATE(MAX('Table1'[Time(send)]), FILTER(ALL('Table1'), 'Table1'[Id] = MAX('Table1'[Id])))
VAR N = CALCULATE(MIN('Table1'[Time(recive)]), FILTER(ALL('Table1'), 'Table1'[Id] = MAX('Table1'[Id])))
RETURN
    IF(M = MAX('Table1'[Time(send)]), DATEDIFF(M, N, DAY), 0)
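To make the measure's logic concrete, here is a small Python sketch under assumed sample data (it mirrors the measure as written, including DATEDIFF counting from its first argument M to its second argument N): per Id, M is the latest send time and N is the earliest receive time, and the day difference is shown only on the row whose send time equals M.

```python
from datetime import datetime

def day_diff_per_id(rows):
    """rows: (Id, time_send, time_receive) tuples. Mirrors the measure:
    M = MAX send per Id, N = MIN receive per Id; a row shows (N - M) in
    whole days only if its send time equals M, otherwise 0."""
    max_send, min_recv = {}, {}
    for rid, send, recv in rows:
        max_send[rid] = max(send, max_send.get(rid, send))
        min_recv[rid] = min(recv, min_recv.get(rid, recv))
    return [
        (min_recv[rid] - max_send[rid]).days if send == max_send[rid] else 0
        for rid, send, _recv in rows
    ]

rows = [
    (1, datetime(2018, 9, 1), datetime(2018, 9, 5)),
    (1, datetime(2018, 9, 2), datetime(2018, 9, 6)),
    (2, datetime(2018, 9, 3), datetime(2018, 9, 4)),
]
print(day_diff_per_id(rows))  # [0, 3, 1]
```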
Result:
You can also download the pbix file to take a look.
Regards,
Daniel He
Hey Daniel, thank you for answering. Unfortunately it didn't work in my case because I need to subtract from the previous row, and there's another issue: in a lot of cases I have 30 rows with the same ID.
However, I really like your formula and I'm using it on another dataset :-).