Hi, I am trying to create 2 calculated columns:
1) Sprint ID
2) Sprint Time - starts from 0 and counts up until velocity drops below 15
I tried this formula, but no luck. Here is my desired outcome:
| Date | Time | Seconds | Velocity | Acceleration | Index | Sprint Time | Sprint ID |
|---|---|---|---|---|---|---|---|
| 02/03/2019 | 18:19:00 | 3 | 13.51 | 0.01 | 0 | | |
| 02/03/2019 | 18:19:00 | 3 | 13.51 | 0.01 | 1 | | |
| 02/03/2019 | 18:19:00 | 3 | 13.74 | 0.17 | 2 | | |
| 02/03/2019 | 18:19:00 | 4 | 14.06 | 0.35 | 3 | | |
| 02/03/2019 | 18:19:00 | 4 | 13.63 | 0.07 | 4 | | |
| 02/03/2019 | 18:19:00 | 4 | 13.84 | 0.2 | 5 | | |
| 02/03/2019 | 18:19:00 | 4 | 13.98 | 0.24 | 6 | | |
| 02/03/2019 | 18:19:00 | 4 | 14.36 | 0.45 | 7 | | |
| 02/03/2019 | 18:19:00 | 4 | 14.82 | 0.65 | 8 | | |
| 02/03/2019 | 18:19:00 | 4 | 15.16 | 0.73 | 9 | 0 | 1 |
| 02/03/2019 | 18:19:00 | 4 | 15.39 | 0.71 | 10 | 1 | 1 |
| 02/03/2019 | 18:19:00 | 5 | 15.46 | 0.58 | 11 | 2 | 1 |
| 02/03/2019 | 18:19:00 | 5 | 15.46 | 0.58 | 12 | 3 | 1 |
| 02/03/2019 | 18:19:00 | 5 | 15.81 | 0.67 | 13 | 4 | 1 |
| 02/03/2019 | 18:19:00 | 5 | 16.04 | 0.67 | 14 | 5 | 1 |
| 02/03/2019 | 18:19:00 | 5 | 16.29 | 0.68 | 15 | 6 | 1 |
| 02/03/2019 | 18:20:00 | 5 | 17.1 | 0.87 | 16 | 7 | 1 |
| 02/03/2019 | 18:20:00 | 5 | 17.1 | 0.87 | 17 | 8 | 1 |
| 02/03/2019 | 18:20:00 | 5 | 17.06 | 0.62 | 18 | 9 | 1 |
| 02/03/2019 | 18:20:00 | 5 | 17.49 | 0.77 | 19 | 10 | 1 |
| 02/03/2019 | 18:20:00 | 5 | 17.69 | 0.71 | 20 | 11 | 1 |
| 02/03/2019 | 18:20:00 | 6 | 17.67 | 0.52 | 21 | 12 | 1 |
| 02/03/2019 | 18:20:00 | 6 | 17.62 | 0.36 | 22 | 13 | 1 |
| 02/03/2019 | 18:20:00 | 6 | 17.54 | 0.21 | 23 | 14 | 1 |
| 02/03/2019 | 18:20:00 | 6 | 17.73 | 0.29 | 24 | 15 | 1 |
| 02/03/2019 | 18:20:00 | 6 | 17.44 | 0.02 | 25 | 16 | 1 |
| 02/03/2019 | 18:20:00 | 6 | 17.52 | 0.08 | 26 | 17 | 1 |
| 02/03/2019 | 18:20:00 | 6 | 17.45 | 0.01 | 27 | 18 | 1 |
| 02/03/2019 | 18:20:00 | 11 | 10.16 | 0.56 | 28 | | |
| 02/03/2019 | 18:20:00 | 12 | 10.15 | 0.32 | 29 | | |
| 02/03/2019 | 18:20:00 | 12 | 10.29 | 0.34 | 30 | | |
| 02/03/2019 | 18:20:00 | 12 | 10.8 | 0.61 | 31 | | |
| 02/03/2019 | 18:20:00 | 12 | 10.56 | 0.29 | 32 | | |
| 02/03/2019 | 18:20:00 | 12 | 10.18 | 0.01 | 33 | | |
| 02/03/2019 | 18:20:00 | 12 | 10.7 | 0.36 | 34 | | |
| 02/03/2019 | 18:20:00 | 13 | 10.79 | 0.34 | 35 | | |
| 02/03/2019 | 18:20:00 | 13 | 14.71 | 2.07 | 99 | | |
| 02/03/2019 | 18:20:00 | 13 | 15.78 | 2.3 | 100 | 0 | 2 |
| 02/03/2019 | 18:20:00 | 13 | 16.39 | 2.14 | 101 | 1 | 2 |
| 02/03/2019 | 18:20:00 | 13 | 17.58 | 1.91 | 102 | 2 | 2 |
| 02/03/2019 | 18:20:00 | 13 | 17.58 | 1.91 | 103 | 3 | 2 |
| 02/03/2019 | 18:20:00 | 13 | 17.86 | 1.62 | 104 | 4 | 2 |
| 02/03/2019 | 18:20:00 | 13 | 18.56 | 1.7 | 105 | 5 | 2 |
| 02/03/2019 | 18:20:00 | 14 | 19.14 | 1.68 | 106 | 6 | 2 |
| 02/03/2019 | 18:20:00 | 14 | 19.45 | 1.48 | 107 | 7 | 2 |
| 02/03/2019 | 18:20:00 | 14 | 20.08 | 1.54 | 108 | 8 | 2 |
| 02/03/2019 | 18:20:00 | 14 | 20.19 | 1.24 | 109 | 9 | 2 |
| 02/03/2019 | 18:20:00 | 14 | 20.77 | 1.33 | 110 | 10 | 2 |
| 02/03/2019 | 18:20:00 | 14 | 21.07 | 1.2 | 111 | 11 | 2 |
Hi @BekahLoSurdo ,
that's a good solution to the problem.
However, if you apply it to large data, it could become slow. This will probably run much faster and is also a bit shorter:
let
Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("lZZdlsIwCEb30mdPhJC/uhWP+9/GNHxEnZ4M6bxYq70mcIH4fG4U7yT3SLxvt43bg/cH0fFW+q2EzMeVAulle93+BfAVoCYFuB6X6AL9QU6BigKS9TeWgIQi2FLVjy4ADVuKx2u+8vze8Hy/LVdCEISQegj1CtCiAqUDbQ3kwFih9tD3K4DsANSbbzqDSFgi9+DZVz0jfNdGNFRTqZroNVECpS/Ct21ERORFdzX3HemLqAGbarrE3LcDzH2fAKvxEvXbK0Qyf7rG3PiJKF/G49z4IIoRFf76ruLc+JlA3Wq9x7nxE5Gt93RXc+MnoqK9VWOcGz8RCWuQxuEbH7tCHNRrJPrKxxp5e8/B6Dtn9Nzo2azJ8qVzNASriEbiWx+Ilbv0HIivfSAYb6VvU3ztg8jl40R87+9Q2idh4osfSP0Ul/jiWYz4Dt43DyRZf+D02BcpFkytqmUSsOYixYIhpOM3BtZpTIskixWlTqywo3wWWZ4yizQb07SkbRjRlUQ3FACrIaZFojXm3WK3OUx+kw1Gu4xDArNos6SDDk3MGDRMiz4zBp/jhGdaVIExGMVBNCN/HKm/GUaVsf7z4H6mvn4A", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type text) meta [Serialized.Text = true]) in type table [Date = _t, Time = _t, Seconds = _t, Velocity = _t, Acceleration = _t, Index = _t]),
#"Changed Type" = Table.TransformColumnTypes(Source,{{"Date", type date}, {"Time", type time}, {"Seconds", Int64.Type}, {"Velocity", type number}, {"Acceleration", type number}, {"Index", Int64.Type}}),
#"Added Custom" = Table.AddColumn(#"Changed Type", "GreaterThan15", each [Velocity] > 15),
#"Grouped Rows" = Table.Group(#"Added Custom", {"GreaterThan15"}, {{"All", each if List.First(_[GreaterThan15]) = true then Table.AddIndexColumn(_, "SprintTime",0,1) else _}}, GroupKind.Local),
#"Added Index" = Table.AddIndexColumn(#"Grouped Rows", "Index1", 0, 1),
#"Expanded All" = Table.ExpandTableColumn(#"Added Index", "All", {"Date", "Time", "Seconds", "Velocity", "Acceleration", "Index", "SprintTime"}, {"Date", "Time", "Seconds", "Velocity", "Acceleration", "Index", "SprintTime"}),
AddSprintID = Table.AddColumn(#"Expanded All", "SprintID", each if [GreaterThan15] then Number.RoundUp([Index1]/2) else null),
#"Removed Columns" = Table.RemoveColumns(AddSprintID,{"GreaterThan15", "Index1"})
in
#"Removed Columns"
The key lies in the 4th parameter of the Table.Group-function, where you use GroupKind.Local. See this article on how it works: https://blog.crossjoin.co.uk/2014/01/03/aggregating-by-local-groups-in-power-query/
Imke Feldmann (The BIccountant)
If you liked my solution, please give it a thumbs up. And if I did answer your question, please mark this post as a solution. Thanks!
How to integrate M-code into your solution -- How to get your questions answered quickly -- How to provide sample data -- Check out more PBI learning resources here -- Performance Tips for M-queries
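For readers less familiar with M, here is a rough Python sketch of the same run-grouping idea that `GroupKind.Local` expresses (it is an illustration of the logic, not the M code itself; the threshold of 15 and the `(None, None)` placeholders for non-sprint rows are assumptions taken from the thread's sample data):

```python
# Illustrative sketch of the "local group" idea: consecutive runs of
# Velocity > 15 each become one sprint, with a Sprint ID per run and a
# 0-based Sprint Time counter within the run. Other rows get (None, None).
from itertools import groupby

def add_sprint_columns(velocities, threshold=15):
    """Return a (sprint_time, sprint_id) pair for each velocity reading."""
    result = []
    sprint_id = 0
    for is_sprint, run in groupby(velocities, key=lambda v: v > threshold):
        run = list(run)
        if is_sprint:
            sprint_id += 1
            result.extend((t, sprint_id) for t in range(len(run)))
        else:
            result.extend((None, None) for _ in run)
    return result
```

Like `GroupKind.Local`, `itertools.groupby` only merges *adjacent* rows with the same key, so each contiguous run above the threshold becomes its own group, e.g. `add_sprint_columns([13, 16, 17, 10, 16])` yields `[(None, None), (0, 1), (1, 1), (None, None), (0, 2)]`.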
@Anonymous I don't feel you have explained this clearly. You already have a Sprint ID column in the table; what Sprint ID are you trying to calculate?
Sprint Time: are you wanting to sum all the time up until velocity < 15?
Also, in your calculation you have a helper: Data[Helper]=1.
Where is that in this data?
Hi, sorry I was unclear. The table I attached was my desired outcome; what I have at the moment is:
| Date | Time | Seconds | Velocity | Acceleration |
|---|---|---|---|---|
| 02/03/2019 | 18:19:00 | 3 | 13.51 | 0.01 |
| 02/03/2019 | 18:19:00 | 3 | 13.51 | 0.01 |
| 02/03/2019 | 18:19:00 | 3 | 13.74 | 0.17 |
| 02/03/2019 | 18:19:00 | 4 | 14.06 | 0.35 |
| 02/03/2019 | 18:19:00 | 4 | 13.63 | 0.07 |
| 02/03/2019 | 18:19:00 | 4 | 13.84 | 0.2 |
| 02/03/2019 | 18:19:00 | 4 | 13.98 | 0.24 |
| 02/03/2019 | 18:19:00 | 4 | 14.36 | 0.45 |
| 02/03/2019 | 18:19:00 | 4 | 14.82 | 0.65 |
| 02/03/2019 | 18:19:00 | 4 | 15.16 | 0.73 |
| 02/03/2019 | 18:19:00 | 4 | 15.39 | 0.71 |
| 02/03/2019 | 18:19:00 | 5 | 15.46 | 0.58 |
| 02/03/2019 | 18:19:00 | 5 | 15.46 | 0.58 |
| 02/03/2019 | 18:19:00 | 5 | 15.81 | 0.67 |
| 02/03/2019 | 18:19:00 | 5 | 16.04 | 0.67 |
| 02/03/2019 | 18:19:00 | 5 | 16.29 | 0.68 |
| 02/03/2019 | 18:20:00 | 5 | 17.1 | 0.87 |
| 02/03/2019 | 18:20:00 | 5 | 17.1 | 0.87 |
| 02/03/2019 | 18:20:00 | 5 | 17.06 | 0.62 |
| 02/03/2019 | 18:20:00 | 5 | 17.49 | 0.77 |
| 02/03/2019 | 18:20:00 | 5 | 17.69 | 0.71 |
| 02/03/2019 | 18:20:00 | 6 | 17.67 | 0.52 |
| 02/03/2019 | 18:20:00 | 6 | 17.62 | 0.36 |
| 02/03/2019 | 18:20:00 | 6 | 17.54 | 0.21 |
| 02/03/2019 | 18:20:00 | 6 | 17.73 | 0.29 |
| 02/03/2019 | 18:20:00 | 6 | 17.44 | 0.02 |
| 02/03/2019 | 18:20:00 | 6 | 17.52 | 0.08 |
| 02/03/2019 | 18:20:00 | 6 | 17.45 | 0.01 |
| 02/03/2019 | 18:20:00 | 11 | 10.16 | 0.56 |
| 02/03/2019 | 18:20:00 | 12 | 10.15 | 0.32 |
| 02/03/2019 | 18:20:00 | 12 | 10.29 | 0.34 |
| 02/03/2019 | 18:20:00 | 12 | 10.8 | 0.61 |
Hi @Anonymous,
I would do this in Power Query. Using your starting data in an Excel workbook, here is my code:
let
Source = Excel.Workbook(File.Contents("C:\Users\rlosurdo\Desktop\SprintTime.xlsx"), null, true),
Original_Sheet = Source{[Item="Original",Kind="Sheet"]}[Data],
#"Promoted Headers" = Table.PromoteHeaders(Original_Sheet, [PromoteAllScalars=true]),
#"Changed Type" = Table.TransformColumnTypes(#"Promoted Headers",{{"Date", type date}, {"Time", type datetime}, {"Seconds", Int64.Type}, {"Velocity", type number}, {"Acceleration", type number}}),
#"Added Index" = Table.AddIndexColumn(#"Changed Type", "Index", 0, 1),
Velocities = Table.FromColumns(
Table.ToColumns(#"Added Index") &
Table.ToColumns(
Table.InsertRows(Table.SelectColumns(#"Added Index","Velocity"),0,{[Velocity = null]})) &
Table.ToColumns(
Table.RemoveFirstN(Table.SelectColumns(#"Added Index","Velocity"),1)),
Table.ColumnNames(#"Added Index") & {"PreviousVelocity", "NextVelocity"}),
#"Removed Bottom Rows" = Table.RemoveLastN(Velocities,1),
#"Added SprintAudit" = Table.AddColumn(#"Removed Bottom Rows", "SprintAudit", each if [Velocity] >= 15 then if [PreviousVelocity] < 15 then "Initial" else if [NextVelocity] < 15 or [NextVelocity] is null then "Completion" else "Continued" else null),
#"Added Join Key 1" = Table.AddColumn(#"Added SprintAudit", "Join Key", each if [SprintAudit] <> null then 1 else 0),
InitialIndexTable = Table.SelectRows(#"Added Join Key 1", each ([SprintAudit] = "Initial")),
#"Added Initial Index" = Table.AddIndexColumn(InitialIndexTable, "Sprint Index", 1, 1),
CompletionIndexTable = Table.SelectRows(#"Added Join Key 1", each ([SprintAudit] = "Completion")),
#"Added Completion Index" = Table.AddIndexColumn(CompletionIndexTable, "Sprint Index", 1, 1),
IndexTable = Table.Combine({#"Added Completion Index", #"Added Initial Index"}),
#"Removed Other Columns" = Table.SelectColumns(IndexTable,{"SprintAudit", "Sprint Index", "Index"}),
#"Pivoted Column" = Table.Pivot(#"Removed Other Columns", List.Distinct(#"Removed Other Columns"[SprintAudit]), "SprintAudit", "Index", List.Sum),
#"Added Join Key" = Table.AddColumn(#"Pivoted Column", "Join Key", each 1),
#"Merged Queries" = Table.NestedJoin(#"Added Join Key 1",{"Join Key"},#"Added Join Key",{"Join Key"},"IndexTable",JoinKind.LeftOuter),
#"Expanded IndexTable" = Table.ExpandTableColumn(#"Merged Queries", "IndexTable", {"Sprint Index", "Completion", "Initial"}, {"Sprint Index", "Completion", "Initial"}),
#"Added Sprint ID" = Table.AddColumn(#"Expanded IndexTable", "Sprint ID", each try if [Index] >= [Initial] and [Index] <= [Completion] then [Sprint Index] else 0 otherwise null),
#"Filtered Rows" = Table.SelectRows(#"Added Sprint ID", each ([Sprint ID] <> 0)),
Partition = Table.Group(#"Filtered Rows", {"Sprint ID"}, {{"Partition", each Table.AddIndexColumn(_, "Sprint Time Setup",0,1), type table}}),
#"Expanded Partition" = Table.ExpandTableColumn(Partition, "Partition", {"Date", "Time", "Seconds", "Velocity", "Acceleration", "Index", "Sprint Time Setup"}, {"Date", "Time", "Seconds", "Velocity", "Acceleration", "Index", "Sprint Time Setup"}),
#"Added Sprint Time" = Table.AddColumn(#"Expanded Partition", "Sprint Time", each if [Sprint ID] <> null then [Sprint Time Setup] else null),
#"Sorted Rows" = Table.Sort(#"Added Sprint Time",{{"Index", Order.Ascending}}),
#"Removed Columns" = Table.RemoveColumns(#"Sorted Rows",{"Sprint Time Setup"}),
#"Reordered Columns" = Table.ReorderColumns(#"Removed Columns",{"Date", "Time", "Seconds", "Velocity", "Acceleration", "Index", "Sprint Time", "Sprint ID"})
in
#"Reordered Columns"
Here is the resulting table in Power BI:
I basically created a few temporary tables so that I could define and index each sprint as its own entity. Then I used @ImkeF's partition magic (https://community.powerbi.com/t5/Desktop/Custom-column-Index-or-Ranking-by-other-column/td-p/33864) to add an index for each individual Sprint ID.
Let me know if you have any questions or would like me to send over my workbook.
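The core of the query above is the "SprintAudit" classification. Here is a rough Python illustration of that neighbour-comparison step (an assumption-laden sketch, not a translation of the M code: the threshold of 15 and the treatment of the first/last rows as sprint boundaries are taken from the sample data, and `None` stands in for the query's null):

```python
# Label each reading by comparing it with its neighbours:
# "Initial" = first row of a sprint (previous velocity below threshold),
# "Completion" = last row (next velocity below threshold or missing),
# "Continued" = interior row, None = not part of a sprint.
def sprint_audit(velocities, threshold=15):
    labels = []
    n = len(velocities)
    for i, v in enumerate(velocities):
        if v < threshold:
            labels.append(None)
            continue
        prev_v = velocities[i - 1] if i > 0 else None
        next_v = velocities[i + 1] if i < n - 1 else None
        if prev_v is None or prev_v < threshold:
            labels.append("Initial")
        elif next_v is None or next_v < threshold:
            labels.append("Completion")
        else:
            labels.append("Continued")
    return labels
```

Once each sprint's "Initial" and "Completion" rows are indexed, any row whose index falls between a matching pair belongs to that sprint, which is what the pivot-and-merge steps in the query accomplish.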
This is great @ImkeF. I did not know about GroupKind.Local and can see how it is the perfect thing to use here and also how it could be useful elsewhere - I appreciate the addition to my M toolkit!