Problem Overview: I have run data for jobs that run daily. Each run consists of incremental steps, and each step has a duration logged against it. Historical runs are mapped to a ParentAppID, which in turn maps back to a run and is used for calculating the duration variance of that run. The errors against each run are logged as well. I need to get the top 10 issues across all these tables, considering both the duration variance and the errors, and list them out. The top 10 issues should be ranked by weighing the number of errors against the variance.
For example:
1. If an error E2 has a count of 10 for a run and there is no variance, then E2 is the top issue.
2. If there is a duration variance of, say, -30 for StepID 1 and only one error E1 with a count of 1, then StepID 1 with a variance of -30 is the top issue.
The tables are listed below:
1. RunData
| RunID | StepID | Duration |
| 1 | 1 | 100 |
| 1 | 2 | 200 |
| 2 | 1 | 120 |
| 2 | 2 | 300 |
| 3 | 1 | 50 |
| 3 | 2 | 100 |
2. ParentAppRunMapper
| RunID | ParentAppID |
| 1 | A |
| 2 | B |
| 3 | A |
3. Error
| RunID | Error | Count |
| 1 | E1 | 10 |
| 1 | E2 | 1 |
| 2 | E3 | 2 |
| 3 | E4 | 3 |
4. HistoricalRun
| ParentAppID | StepID | Duration |
| A | 1 | 120 |
| A | 2 | 180 |
| B | 1 | 150 |
| B | 2 | 200 |
5. StepDetails
| StepID | StepName |
| 1 | S1 |
| 2 | S2 |
6. TopIssues
| IssueID | RunID | VarianceName | Variance | IssueType |
| 1 | 1 | E1 | 10 | Error |
| 2 | 1 | S1 | -20 | Variance |
| 3 | 2 | S2 | -30 | Variance |
| 4 | 2 | E2 | 3 | Error |
| 5 | 3 | E3 | 3 | Variance |
The first five tables are used to derive the sixth table (TopIssues). Once the calculation is done in DAX, the output should match the sixth table. The following calculations are needed:
- The duration variance between a run (RunData table) and its historical run data (HistoricalRun table) — a minimal measure sketch is shown after this list.
- A list of the top 10 issues (across the duration variance above and the error data for a run), written to a new table (TopIssues) for all runs.
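A minimal sketch of such a variance measure, assuming it is evaluated at the RunID/StepID grain (for example in a table visual built on RunData) and that there are no relationships between the tables, so the lookups are done explicitly; the measure name is hypothetical:

Duration Variance =
VAR _RunID = SELECTEDVALUE ( RunData[RunID] )
VAR _StepID = SELECTEDVALUE ( RunData[StepID] )
VAR _CurrentDuration = SUM ( RunData[Duration] )
VAR _ParentApp =
    LOOKUPVALUE ( ParentAppRunMapper[ParentAppID], ParentAppRunMapper[RunID], _RunID )
VAR _HistoricalDuration =
    CALCULATE (
        SUM ( HistoricalRun[Duration] ),
        HistoricalRun[ParentAppID] = _ParentApp,
        HistoricalRun[StepID] = _StepID
    )
RETURN
    -- current minus historical, e.g. RunID 1 / StepID 1: 100 - 120 = -20
    _CurrentDuration - _HistoricalDuration

The sign convention (current minus historical) follows the -20 shown for S1 in the expected TopIssues table.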
I will use this in a Smart Narrative visual, for example:
Top issues for this run (RunID: 1) are:
- E1 error has occurred 10 times
- S1 has a variance of 20 seconds more than the historical run
Question:
1. How can I write a DAX measure to calculate the duration variance across multiple tables?
2. How can I write a DAX measure to arrive at the top 10 issues across the duration variance and errors, considering the weightage of the errors versus the duration variance?
Please find here the pbix file I have used.
Hi, @sanal5677
Based on your description and follow-up, I can roughly understand your requirement. I did some operations in Power BI to get a table similar to your expected table; you can take a look at my steps:
‘Merge1’ is the table I got:
This is the M query in the advanced editor:
let
Source = Table.NestedJoin(RunData, {"StepID"}, StepDetails, {"StepID"}, "StepDetails", JoinKind.LeftOuter),
#"Expanded StepDetails" = Table.ExpandTableColumn(Source, "StepDetails", {"StepName"}, {"StepDetails.StepName"}),
#"Merged Queries" = Table.NestedJoin(#"Expanded StepDetails", {"RunID"}, ParentAppRunMapper, {"RunID"}, "ParentAppRunMapper", JoinKind.LeftOuter),
#"Expanded ParentAppRunMapper" = Table.ExpandTableColumn(#"Merged Queries", "ParentAppRunMapper", {"ParentAppID"}, {"ParentAppRunMapper.ParentAppID"}),
#"Merged Queries1" = Table.NestedJoin(#"Expanded ParentAppRunMapper", {"ParentAppRunMapper.ParentAppID", "StepID"}, HistoricalRun, {"ParentAppID", "StepID"}, "HistoricalRun", JoinKind.LeftOuter),
#"Expanded HistoricalRun" = Table.ExpandTableColumn(#"Merged Queries1", "HistoricalRun", {"Duration"}, {"HistoricalRun.Duration"})
in
#"Expanded HistoricalRun"
Table =
VAR _Variance =
    SELECTCOLUMNS ( 'Merge1', "RunID", [RunID], "VarianceName", [StepDetails.StepName], "Variance", [Variance], "Issue Type", "Variance" )
VAR _Error =
    SELECTCOLUMNS ( 'Error', "RunID", [RunID], "VarianceName", [Error], "Variance", [Count], "Issue Type", "Error" )
RETURN
    UNION ( _Variance, _Error )
weightage =
IF (
    [Issue Type] = "Variance" && [Variance] > 0, BLANK (),
    IF ( [Issue Type] = "Error", [Variance] * 10, ABS ( [Variance] ) )
)
Issue ID = RANKX('Table',[weightage],,DESC,Dense)
And I guess this can be what you want.
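One note on the ranking: the Issue ID column above ranks every row across all runs. If the goal is the top 10 issues per run (as the Smart Narrative example suggests), a per-run rank column could be used instead; this is only a sketch on top of the solution above, and the column name Rank In Run is hypothetical:

Rank In Run =
RANKX (
    FILTER ( 'Table', 'Table'[RunID] = EARLIER ( 'Table'[RunID] ) ),
    'Table'[weightage],
    ,
    DESC,
    DENSE
)

Rows with Rank In Run <= 10 for the selected RunID can then be surfaced in the Smart Narrative.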
You can download my test pbix file here
Best Regards,
Community Support Team _Robert Qin
If this post helps, then please consider Accept it as the solution to help the other members find it more quickly.
Hi, @sanal5677
Based on your description and sample data, I read it for a long time, but I still cannot figure out the logic behind the columns in the expected table:
Could you explain the logic of the values in your expected table one by one, so that I can understand and help you achieve your requirement?
Thank you very much!
How to Get Your Question Answered Quickly
Best Regards,
Community Support Team _Robert Qin
If this post helps, then please consider Accept it as the solution to help the other members find it more quickly.
The TopIssues table gives a summary of the variances between past and current runs, along with any errors for that run. For calculating the variance, the run data (RunData table) is compared against the HistoricalRun table, and for the errors we use the Error table for a given RunID. There are two types of issues to expect: variance and error. So the sixth table (TopIssues) will be a calculated table that summarises those variances and errors by ranking them based on a weightage across the variance and error data, like the example I gave above:
- E1 error has occurred 10 times
- S1 has a variance of 20 seconds more than the historical run
Here E1 has more weightage because it has occurred more than 10 times (the threshold). So the calculation needs to take the weightage and the threshold into account while arriving at the top 10 issues summary for a given RunID, running across both the variance and error data.
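To illustrate this ranking with the sample data, using the weightage column from the accepted solution as one reading of the rule: for RunID 1, error E1 scores 10 × 10 = 100, while step S1 scores |-20| = 20, so E1 ranks above S1, matching the narrative above. The × 10 multiplier is the solution's chosen weight for errors, not something fixed by the requirement, and can be tuned to reflect the threshold.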