Hello,
I'm trying to optimize this function, which I suspect is badly written. The calculation is heavy because of the datetime fields (with seconds precision) and because each of the two source tables has approximately 1M rows:
FILTER(
    GENERATE(
        SELECTCOLUMNS(Stops, "aStart", Stops[Start], "aEnd", Stops[End], "aID", Stops[ID], "aMac", Stops[IDDevice]),
        SELECTCOLUMNS(Cycle, "bStart", Cycle[Start], "bEnd", Cycle[End], "bID", Cycle[ID])
    ),
    OR(AND([aStart] >= [bStart], [aStart] <= [bEnd]), AND([aEnd] >= [bStart], [aEnd] <= [bEnd]))
)
The Stops and Cycle tables are already related by IDDevice.
The overall goal is a table showing which malfunctions in the Stops table affect which cycle IDs.
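One thing worth noting: the GENERATE above cross-joins every Stops row with every Cycle row, with no restriction on IDDevice. A sketch of a device-restricted variant is below (untested, column names assumed from the code above; the `Overlaps` name is illustrative):

```dax
Overlaps =
FILTER(
    GENERATE(
        Stops,
        FILTER(
            Cycle,
            // only compare cycles on the same device as the current stop
            Cycle[IDDevice] = Stops[IDDevice]
        )
    ),
    // general interval-overlap test; unlike the OR/AND version above,
    // this also catches a stop that fully spans a cycle
    Stops[Start] <= Cycle[End] && Stops[End] >= Cycle[Start]
)
```

Restricting the inner table by device shrinks the generated row set before the overlap test runs, which is usually where most of the time goes in this pattern.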
I also tried doing that in M as an alternative:
Buffered = Table.Buffer(Cycle),
#"Added Custom" =
    Table.AddColumn(Stops, "Custom",
        (S) => Table.SelectRows(Buffered,
            (P) => P[IDDevice] = S[IDDevice]
                and ((S[Start] >= P[Start] and S[Start] <= P[End])
                  or (S[End] >= P[Start] and S[End] <= P[End]))
        )
    )
The problem is that both the DAX and the M versions are very slow (approximately 1 row per second), and Table.Buffer does not seem to speed up the query.
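A possible way to avoid the per-row Table.SelectRows scan in M is to join on IDDevice first, expand, and filter the whole table in one pass. A sketch, assuming the column names above (the step and column names introduced here are illustrative):

```m
let
    // inner join pairs each stop only with cycles on the same device
    Joined = Table.NestedJoin(Stops, {"IDDevice"}, Cycle, {"IDDevice"}, "CycleRows", JoinKind.Inner),
    Expanded = Table.ExpandTableColumn(
        Joined, "CycleRows",
        {"Start", "End", "ID"},
        {"CycleStart", "CycleEnd", "CycleID"}
    ),
    // keep only pairs whose intervals actually overlap
    Overlaps = Table.SelectRows(Expanded,
        each [Start] <= [CycleEnd] and [End] >= [CycleStart])
in
    Overlaps
```

The join can fold or at least run as a set operation instead of 1M nested table scans, which is typically much faster than the AddColumn/SelectRows pattern.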
Can you please help me?
Thanks in advance
The general guidance is to reduce cardinality as early as possible. This might be an option in your scenario. Please provide some sample data (ideally a lot, but whatever you can manage) and please explain the comparison logic again.
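For example (a hypothetical M step, assuming a datetime-typed Start column as in your query), truncating the datetimes to minute precision cuts the number of distinct values the engine has to work with, if seconds precision is not actually needed for the overlap test:

```m
// drop the seconds component of each Start value
TruncatedStart = Table.TransformColumns(
    Stops,
    {{"Start", each _ - #duration(0, 0, 0, Time.Second(DateTime.Time(_))), type datetime}}
)
```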