The output of a calculated measure (see Figure #3 below) is not changing in response to slicer selections in a visualization of a related (primary or central) table (see Figures #4, #5, and #6 below).
However, a different instantiation of the same measure, using identical syntax, in the visualization sheet (see Figure #4 below) and, presumably, a different context does change in response to slicer selections (see Figures #5 and #6).
So clearly the context of the [PerfMin#] calculation in the related ‘Perf Rating Scale’ lookup table (see Figure #3) is not correct.
I’ve tried numerous options with both syntax (referencing or not referencing the table and/or column, etc.) and DAX functions (RELATEDTABLE, ALLEXCEPT, et al.) without success.
Since we have different rating distribution targets for each performance rating level, the objective of the [PerfMin#] and [PerfMax#] measures is to calculate specific target range counts based on the resultant count of employees (rows) for any given slicer selection.
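For example (hypothetical numbers), if a slicer selection returns 200 employees and a given rating level has a 10% minimum target, then [PerfMin#] for that level should evaluate to 200 * 10 / 100 = 20.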
So the final or desired [PerfMin#] measure would be as follows:
PerfMin# = CALCULATE( COUNTROWS('Calibration Detail Data'), ALLSELECTED('Calibration Detail Data') ) * ('Perf Rating Scale'[PerfMin%]/100)
[Note that the right operand, “… ('Perf Rating Scale'[PerfMin%]/100) …”, was dropped in the examples below to better highlight that the left operand result was not changing in response to slicer selections.]
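In other words, the simplified measure actually shown in those figures is just the left operand:
PerfMin# = CALCULATE( COUNTROWS('Calibration Detail Data'), ALLSELECTED('Calibration Detail Data') )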
The correct results for the selections, or lack of selections, shown in Figures #4, #5, and #6 are captured in Figures #7, #8, and #9 (see below).
The ultimate objective is to plot [PerfMin#] and [PerfMax#] on the Performance Rating Distribution graph to show how the actual ratings compare to the target ratings.
Any suggestions on how to solve this problem?
Thanks,
Figure #1: Table relationships (‘Calibration Detail Data’ is central table with other tables primarily serving as lookup tables)
Figure #2: Data relationship between ‘Perf Rating Scale’ and ‘Calibration Detail Data’ Tables (‘Perf Rating Scale’[Perf Rating] has a 1 to many relationship with ‘Calibration Detail Data’[Performance Rating])
Figure #3: Calculated measure in ‘Perf Rating Scale’ Table, which is used to generate the ‘Perf Rating Scale’[PerfMin#] calculated column
Figure #4: Two data visualizations using the same calculated measure: one using [PerfMin#] from the ‘Perf Rating Scale’ table and the other using [Rows] calculated directly on the visualization (and, strictly speaking, a third measure, [Selected Employees], producing the same intended output by counting the number of distinct names in the ‘Calibration Detail Data’[Name] column)
Figure #5: Output of the [PerfMin#] measure DOES NOT change, but output of the [Rows] (and [Selected Employees]) measures DOES change, in response to selecting “Strong Performer” on the Performance Rating Distribution graph
Figure #6: Output of the [PerfMin#] measure DOES NOT change, but output of the [Rows] (and [Selected Employees]) measures DOES change, in response to selecting “Engineer” in the Job Level slicer (filter)
Figure #7: Correct result for visualization in Figure #4
Figure #8: Correct result for visualization in Figure #5
Figure #9: Correct result for visualization in Figure #6
Hi @raimund
You may try creating a measure instead of a calculated column for [PerfMin#], as below:
PerfMin# =
VAR a =
    CALCULATE (
        COUNTROWS ( 'Calibration Detail Data' ),
        ALLSELECTED ( 'Calibration Detail Data' )
    )
RETURN
    CALCULATE ( a * MAX ( 'Calibration Detail Data'[PerMin%] ) / 100 )
Regards,
Cherie
Thanks for the reply, Cherie.
I had thought about this approach but wasn't sure it would use the correct percentages for each performance rating level.
After referencing the correct table, the calculated measure seems to generate the correct results!
Here's the updated expression:
PerfMin# =
VAR a =
    CALCULATE (
        COUNTROWS ( 'Calibration Detail Data' ),
        ALLSELECTED ( 'Calibration Detail Data' )
    )
RETURN
    CALCULATE ( a * MAX ( 'Perf Rating Scale'[PerfMin%] ) / 100 )
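For completeness, the companion [PerfMax#] measure should presumably follow the same pattern (assuming the upper target percentage lives in a 'Perf Rating Scale'[PerfMax%] column; that column name is my assumption):
PerfMax# =
VAR a =
    // Count of employees (rows) under the current slicer selections, same as above
    CALCULATE (
        COUNTROWS ( 'Calibration Detail Data' ),
        ALLSELECTED ( 'Calibration Detail Data' )
    )
RETURN
    // 'Perf Rating Scale'[PerfMax%] is assumed to hold the upper target percentage per rating
    CALCULATE ( a * MAX ( 'Perf Rating Scale'[PerfMax%] ) / 100 )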
Thanks for your help,