I have a table in Power BI with the list of problems that cause downtime on specific machines. The structure of the table is as follows:
machine_id | start_timestamp | downtime (sec.) |
1 | 10/10/2021 7:03:00 AM | 100 |
1 | 10/10/2021 7:04:00 AM | 30 |
1 | 10/10/2021 7:06:00 AM | 300 |
2 | 10/10/2021 7:08:00 AM | 20 |
What I want to calculate as a measure is, for each specific machine and for a certain period of time, the total downtime without double-counting overlapping values. That means, if one problem already causes downtime during a certain interval, a new problem that overlaps it shouldn't be added to the calculation (or should only be added partially, for the non-overlapping part).
As an example, for machine_id = 1, between 10/10/2021 7:00:00 AM and 10/10/2021 7:10:00 AM I would obtain a total downtime of 100 + 240 = 340 seconds: the 30-second problem at 7:04:00 falls entirely within the first problem (7:03:00 to 7:04:40), so it adds nothing, and the 300-second problem starting at 7:06:00 is cut off at the end of the period, contributing only 240 seconds.
Can you please help me? I have tried many approaches without success. If anything needs clarifying, just say so.
Thanks!
@GoingDigital123 OK, I adjusted Overlap a little and solved this. The PBIX is attached below my signature. Basically, I added the following column to the table:
end_timestamp = [start_timestamp] + [downtime (sec.)]*1/24/60/60  // downtime in seconds converted to a day fraction
Then this measure:
Overlap =
VAR __Start = DATEVALUE("10/10/2021") + 7/24  // period start: 10/10/2021 7:00:00 AM
VAR __End = DATEVALUE("10/10/2021") + 7/24 + 10/24/60  // period end: 10/10/2021 7:10:00 AM
VAR __Table = GENERATESERIES(__Start,__End,1/24/60/60)  // one row per second in the period
VAR __Table1 = ALL('Table')  // all downtime events
VAR __Table2 = GENERATE(__Table,__Table1)  // pair every second with every event
VAR __Table3 = ADDCOLUMNS(__Table2,"Include",IF([Value]>=[start_timestamp] && [Value] <= [end_timestamp],1,0))  // flag seconds that fall inside an event
VAR __Table4 = GROUPBY(__Table3,[Value],"Second",MAXX(CURRENTGROUP(),[Include]))  // collapse to one row per second; 1 if covered by any event
RETURN
SUMX(__Table4,[Second])  // total covered seconds, overlaps counted once
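For readers who want the period and machine to come from slicers instead of the hardcoded literals above, here is a minimal sketch of a filter-aware variant. Deriving the window from the visible rows is my own assumption, not something from the attached PBIX:

Overlap (selected period) =
VAR __Start = MIN('Table'[start_timestamp])  // earliest visible event start
VAR __End = MAX('Table'[end_timestamp])  // latest visible event end
VAR __Seconds = GENERATESERIES(__Start,__End,1/24/60/60)  // one row per second in the window
VAR __Events = 'Table'  // only the events in the current filter context (machine slicer, etc.)
VAR __Cross = GENERATE(__Seconds,__Events)
VAR __Flagged = ADDCOLUMNS(__Cross,"Include",IF([Value]>=[start_timestamp] && [Value] <= [end_timestamp],1,0))
VAR __PerSecond = GROUPBY(__Flagged,[Value],"Covered",MAXX(CURRENTGROUP(),[Include]))
RETURN
SUMX(__PerSecond,[Covered])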
Hi, @Greg_Deckler.
I was wondering if you could help me with this problem.
As preconditions, the data must be grouped or filtered by number, client, and client location.
I need to take the difference between the highest end date and the lowest start date of a continuous date-time range; that is, overlapping time should not be added multiple times. The problem is that I also end up counting time that is not actually continuous. I show the error in the attached images, in case that makes it more understandable.
Duration is what I extracted from your example; LAMS is a test of mine that isn't working either.
Thanks in advance
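For reference, the "highest end minus lowest start" part of that request can be sketched like this (column names are borrowed from this thread's table as an assumption; note it deliberately exhibits the overcounting problem described above):

Span (sec.) =
VAR __First = MIN('Table'[start_timestamp])  // lowest start in the current group/filter
VAR __Last = MAX('Table'[end_timestamp])  // highest end in the current group/filter
RETURN
(__Last - __First) * 24 * 60 * 60  // day fraction back to seconds; idle gaps between events are included, which is the "not actually continuous" problem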
Thanks @Greg_Deckler ! It worked quite well!
The only problem I see is when there are many rows in the real dataset. In reality, I have several machines and the period covers several days, so the measure takes a long time to calculate. I am quite new to Power BI, but I was wondering whether Python scripts might help speed up the calculation in some way?
@GoingDigital123 Yeah, it is not a particularly fast measure on large datasets, there is no doubt about that. I tried a bunch of different methods when I was constructing that measure and finally gave up and took the brute-force approach. One of the main issues in your case comes from the granularity: seconds versus minutes or hours. You would likely get an order-of-magnitude performance increase by working in minutes instead of seconds. So basically, in the __Table step drop the last /60, and then at the end multiply by 60 to get back to seconds. It might be slightly less accurate but would be significantly faster.
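In other words, a minute-granularity version of the measure would look like this (my paraphrase of the change described above, not code from the attached PBIX):

Overlap (minutes) =
VAR __Start = DATEVALUE("10/10/2021") + 7/24
VAR __End = DATEVALUE("10/10/2021") + 7/24 + 10/24/60
VAR __Table = GENERATESERIES(__Start,__End,1/24/60)  // one row per minute instead of per second
VAR __Table1 = ALL('Table')
VAR __Table2 = GENERATE(__Table,__Table1)
VAR __Table3 = ADDCOLUMNS(__Table2,"Include",IF([Value]>=[start_timestamp] && [Value] <= [end_timestamp],1,0))
VAR __Table4 = GROUPBY(__Table3,[Value],"Minute",MAXX(CURRENTGROUP(),[Include]))
RETURN
SUMX(__Table4,[Minute]) * 60  // covered minutes converted back to seconds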
@GoingDigital123 These should help:
Overlap - Microsoft Power BI Community
See my article on Mean Time Between Failure (MTBF) which uses EARLIER: http://community.powerbi.com/t5/Community-Blog/Mean-Time-Between-Failure-MTBF-and-Power-BI/ba-p/3395....
The basic pattern is:
Column =
VAR __Current = [Value]  // value on the current row
VAR __PreviousDate = MAXX(FILTER('Table','Table'[Date] < EARLIER('Table'[Date])),[Date])  // latest date before the current row's date
VAR __Previous = MAXX(FILTER('Table',[Date]=__PreviousDate),[Value])  // value on that previous row
RETURN
__Current - __Previous  // difference between consecutive rows
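Adapted to the downtime table from this thread (an assumption on my part; the article's pattern uses generic [Date]/[Value] columns, and VARs stand in for EARLIER here), a time-since-previous-event column might look like:

Time Since Previous (sec.) =
VAR __Machine = 'Table'[machine_id]
VAR __Current = 'Table'[start_timestamp]
VAR __PreviousStart =
    MAXX(
        FILTER('Table',
            'Table'[machine_id] = __Machine
                && 'Table'[start_timestamp] < __Current),
        'Table'[start_timestamp])  // latest earlier start for the same machine
RETURN
IF(
    ISBLANK(__PreviousStart),
    BLANK(),  // first event for the machine has no predecessor
    (__Current - __PreviousStart) * 24 * 60 * 60  // day fraction converted to seconds
)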
Overall, it is going to be a messy calculation: you essentially have to add an end-time column to each row and then exclude rows that fall within the start and end time of another row. And that is the easy part compared with handling partial overlaps. I might work on this some more since it is interesting.
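As a starting point for that exclusion step, here is a minimal sketch of a calculated column that flags rows fully contained in another event for the same machine, assuming the end_timestamp column from the accepted solution exists:

Contained =
VAR __Machine = 'Table'[machine_id]
VAR __Start = 'Table'[start_timestamp]
VAR __End = 'Table'[end_timestamp]
VAR __Covering =
    FILTER('Table',
        'Table'[machine_id] = __Machine
            && 'Table'[start_timestamp] <= __Start
            && 'Table'[end_timestamp] >= __End
            && NOT('Table'[start_timestamp] = __Start
                && 'Table'[end_timestamp] = __End))  // exclude the row itself (and exact duplicates)
RETURN
IF(COUNTROWS(__Covering) > 0, 1, 0)  // 1 = fully inside another event, safe to exclude

Partial overlaps still need trimming logic on top of this, which is exactly the part the brute-force measure above sidesteps.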