Hi,
I am looking for a solution to find the nearest datetime in one table after a specific datetime stamp in another.
So I have two tables:
Table1
ID | Start | End |
IX34572 | 2024-11-05 12:37:48 | 2024-11-05 13:47:23 |
IX34527 | 2024-11-06 13:07:34 | |
IF9286 | 2024-10-28 14:36:00 | |
IF9286 | 2024-11-03 00:17:38 | 2024-11-03 04:13:14 |
Table2
ID | End |
IX34572 | 2024-10-07 14:38:15 |
IX34572 | 2024-11-06 16:15:00 |
IX34572 | 2024-11-06 14:01:15 |
IF9286 | 2024-10-28 14:37:00 |
IF9286 | 2024-10-28 17:37:15 |
What I need to do is ignore every cell in Table1's End column that is already filled, and for each empty cell, find the nearest timestamp in Table2 after the Start date from Table1 and paste it into Table1. So for IX3472 it would be 2024-11-06 14:01:15, and for IF9286 it would be 2024-10-28 14:37:00.
In Excel I've done this with complex IF combinations.
Is this even possible in Power BI?
I appreciate any help.
k
Hi @kkarol ,
Thanks for all the replies!
Please check these replies to see if one of them provides the solution you are looking for. If a user's reply helps you, please don't forget to accept it as the solution so that other users can find it faster. Thank you!
Best Regards,
Dino Tao
Hi @kkarol
Just copy the code below and paste it into the Advanced Editor.
let
Table1 = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("bc3bCQMxDETRVoy+16CXLWcKWEgHAeP+24hCwrKY/F6ORnPS82XeQukgZfUqUrkVUVjAx1YNHlCjdfzuNO6ifwQHzLOWrzofOvqFuOoo4rAO5qz/TA5ZYYbk0NiqIz+I01pv", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [ID = _t, Start = _t, End = _t]),
#"Changed Type" = Table.TransformColumnTypes(Table1,{{"ID", type text}, {"Start", type datetime}, {"End", type datetime}}),
Table2 = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45W8owwNjE1N1LSUTIyMDLRNTTQNTBXMDSxMrawMjRVitXBVGGoa2CmYGgGlLYyMMCjwsTKwBBuhpulkYUZkiVGFmBLzOFGYFNgDlIAMiEWAA==", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [ID = _t, End = _t]),
#"Changed Type1" = Table.TransformColumnTypes(Table2,{{"ID", type text}, {"End", type datetime}}),
Custom1 = Table.AddColumn(#"Changed Type", "End New", each _[End] ?? List.Min(Table.SelectRows(#"Changed Type1", (x) => x[ID] = _[ID] and x[End] >= _[Start])[End]))
in
Custom1
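If you want to run this against your real data instead of the embedded sample rows, only the last step matters. A minimal sketch, assuming your actual queries are named Table1 and Table2 (hypothetical names here) with Start and End already typed as datetime:
let
    // Table1 and Table2 are assumed query names, not the embedded samples above
    Custom1 = Table.AddColumn(Table1, "End New", each
        _[End] ?? List.Min(Table.SelectRows(Table2, (x) => x[ID] = _[ID] and x[End] >= _[Start])[End]))
in
    Custom1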
Given your data, the following should work:
let
    // Note references to Table_1 and Table_2.
    // You can either put them in separate queries, or set them directly in this query.
    Source = Table_1,
    // Keep End where it is filled; otherwise look up the nearest later End in Table_2.
    // In the replacer, x is the current End value, y is the row's Start, z is the row's ID.
    #"New End" = Table.ReplaceValue(
        Source,
        each [Start],
        each [ID],
        (x, y, z) as nullable datetime =>
            if x <> null then x
            else List.Min(List.Select(Table.SelectRows(Table_2, each [ID] = z)[End], (li) => li > y)),
        {"End"})
in
    #"New End"
Thanks a lot for that, it looks like it works! One strange thing though: my source files are about 1.5 MB, but while it is refreshing I see this query loading 1X GB, yet when I save, the .pbix file is 1.5 MB.
I'm not sure why that should be happening.
If it is because of the query, you might try buffering the tables to see if that makes any difference.
e.g.:
Source = Table.Buffer(Table_1),
Table2 = Table.Buffer(Table_2),
#"New End" = Table.ReplaceValue(
    Source,
    each [Start],
    each [ID],
    (x, y, z) as nullable datetime =>
        if x <> null then x
        else List.Min(List.Select(Table.SelectRows(Table2, each [ID] = z)[End], (li) => li > y)),
    {"End"})
(Note that in the New End step we changed a table name from Table_2 to Table2)
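If buffering alone doesn't change the refresh behaviour, one more thing to try is grouping Table_2 by ID first, so each row's lookup scans only that ID's End values rather than re-filtering the whole table. A sketch under the same Table_1/Table_2 assumptions, not something I've profiled:
let
    Source = Table.Buffer(Table_1),
    // Collect each ID's End values once, then buffer the small grouped table
    Grouped = Table.Buffer(Table.Group(Table_2, {"ID"}, {{"Ends", each [End], type list}})),
    #"New End" = Table.ReplaceValue(
        Source,
        each [Start],
        each [ID],
        (x, y, z) as nullable datetime =>
            if x <> null then x
            else
                let ends = Table.SelectRows(Grouped, each [ID] = z)[Ends]
                in if List.IsEmpty(ends) then null
                   else List.Min(List.Select(ends{0}, (li) => li > y)),
        {"End"})
in
    #"New End"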
That works. Thanks!
The data you provided and your example seem to be inconsistent: IX3472 does not exist.
let
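    // Note: this query embeds only Table1's sample data; it assumes a separate query named Table2 exists for the lookup below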
源 = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("bc3bCQMxDETRVoy+16CXLWcKWEgHAeP+24hCwrKY/F6ORnPS82XeQukgZfUqUrkVUVjAx1YNHlCjdfzuNO6ifwQHzLOWrzofOvqFuOoo4rAO5qz/TA5ZYYbk0NiqIz+I01pv", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [ID = _t, Start = _t, End = _t]),
更改的类型 = Table.TransformColumnTypes(源,{{"ID", type text}, {"Start", type datetime}, {"End", type datetime}}),
已添加自定义 = Table.AddColumn(更改的类型, "自定义",
each [End] ?? List.Min(Table.SelectRows(Table2, (r) => [ID] = r[ID] and r[End] > [Start])[End])
)
in
已添加自定义
Hi @kkarol ,
Yes, it's possible to achieve this in Power BI by creating a DAX measure or using Power Query to find the nearest timestamp from `Table2` for each `Start` date in `Table1`, only for rows where `End` is empty.
Here’s a step-by-step approach using Power Query:
1. Load Tables:
- Load `Table1` and `Table2` into Power BI.
2. Add a Custom Column to Filter:
- Go to `Table1` in Power Query Editor.
- Add a custom column to filter `Table2` to only include rows where the `ID` matches and the `End` in `Table2` is after the `Start` date in `Table1`. Name this column `FilteredTable2`.
Power Query M
= Table.SelectRows(Table2, each [ID] = [ID] and [End] > [Start])
3. Extract Nearest DateTime:
- Now, for each row in `Table1` with an empty `End` column, find the nearest date from `FilteredTable2`.
- Add a conditional column to check if `End` is blank. If it is, use the `FilteredTable2` column to find the minimum date.
Power Query M
if [End] = null then List.Min(Table.SelectRows(Table2, each [ID] = [ID] and [End] > [Start])[End]) else [End]
4. Remove Extra Columns:
- Remove `FilteredTable2` and any other unnecessary columns to keep only the updated `End` column.
5. Close and Apply:
- Close Power Query Editor and apply changes to bring the transformed data into Power BI.
This approach uses Power Query’s row-level transformations to achieve the same logic you used in Excel with IF combinations, and Power Query handles the matching and filtering efficiently.
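Putting steps 2-4 together, the whole thing can be a single added column. One caveat (my reading of the snippets above, not tested against your data): inside the inner Table.SelectRows, bare [ID] and [Start] refer to the Table2 row being filtered, so the Table1 row needs to be captured explicitly, for example:
// Sketch combining steps 2-4; Table1 and Table2 are the loaded queries.
// The Table1 row is captured as "outer" so the filter can see both tables' columns.
AddedEnd = Table.AddColumn(Table1, "End New", (outer) =>
    if outer[End] <> null then outer[End]
    else List.Min(Table.SelectRows(Table2, each [ID] = outer[ID] and [End] > outer[Start])[End]))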
Please accept this as the solution if it helps. Kudos appreciated.
2. Add a Custom Column to Filter:
- Go to `Table1` in Power Query Editor.
- Add a custom column to filter `Table2` to only include rows where the `ID` matches and the `End` in `Table2` is after the `Start` date in `Table1`. Name this column `FilteredTable2`.
Power Query M
= Table.SelectRows(Table2, each [ID] = [ID] and [End] > [Start])
When I expand this new column I get "No columns were found." I checked the data and there are definitely rows that match these conditions.
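A possible cause, offered as a guess from the snippet rather than a confirmed diagnosis: in `each [ID] = [ID] and [End] > [Start]`, every bare field reference binds to the Table2 row inside the inner each, so the condition never sees Table1's values (and Table2 has no Start column at all). Naming the Table2 row explicitly should behave as intended:
= Table.SelectRows(Table2, (r) => r[ID] = [ID] and r[End] > [Start])
Here (r) is the Table2 row, while the bare [ID] and [Start] still refer to the current Table1 row from the custom column's own each.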