Looking for help with creating a measure (or measures) to support a visual. *Sample data at the bottom.
There are a couple of different layers to get to the data needed. In the base data we have a primary key with multiple scores, one score per category, that need to be averaged at the primary-key level. The scores are all whole numbers from 1-5. Null scores are possible and expected in the data. The average needs to update when the score categories are filtered.
Along with the average there are 5 score buckets. The result of the average would determine which of 5 score buckets the primary key would fall into. The score buckets are:
Range Name | RangeMin | RangeMax | SortVal |
0-1 | 0 | 1 | 1 |
1-2 | 1 | 2 | 2 |
2-3 | 2 | 3 | 3 |
3-4 | 3 | 4 | 4 |
4-5 | 4 | 5 | 5 |
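If the bucket table above doesn't already exist in the model, it can be built as a disconnected calculated table directly in DAX (a sketch — the ScoreBucket name and columns here match the measures later in the thread; the table must have no relationship to the fact table so the measures can filter it independently):

ScoreBucket =
DATATABLE (
    "Range Name", STRING,
    "RangeMin", INTEGER,
    "RangeMax", INTEGER,
    "SortVal", INTEGER,
    {
        { "0-1", 0, 1, 1 },
        { "1-2", 1, 2, 2 },
        { "2-3", 2, 3, 3 },
        { "3-4", 3, 4, 4 },
        { "4-5", 4, 5, 5 }
    }
)

Setting "Range Name" to sort by "SortVal" (Column tools > Sort by column) keeps the buckets in order across the top of the visual.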
The visual would show the distinct count of primary keys per bucket. The counts would need to update when the average changes due to filtering by score category. There are some other categories that would be used as filters for this visual, but those filters would only include/exclude primary keys and would change counts, not scores.
The end output would look like this, with the score ranges along the top, the values being the distinct counts of IDs falling in each range, and the row headers being any related subcategory. I used Segment since it's in the sample data.
Segment | 0-1 | 1-2 | 2-3 | 3-4 | 4-5 |
1 | 112 | 214 | 352 | 256 | 118 |
2 | 139 | 235 | 350 | 238 | 126 |
3 | 247 | 317 | 301 | 280 | 156 |
4 | 98 | 147 | 271 | 185 | 135 |
… |
Sample Data:
ID | Segment | Score_Category | Score_value | Region |
9 | 8 | 10024 | 3 | 9 |
9 | 8 | 10025 | 5 | 9 |
9 | 8 | 10026 | 3 | 9 |
9 | 8 | 10027 | 3 | 9 |
9 | 8 | 10029 | 4 | 9 |
9 | 8 | 10030 | 5 | 9 |
9 | 8 | 10031 | 2 | 9 |
9 | 8 | 10033 | 4 | 9 |
9 | 8 | 10036 | 4 | 9 |
9 | 8 | 10037 | 3 | 9 |
9 | 8 | 10038 | 3 | 9 |
12 | 5 | 10005 | 5 | 7 |
12 | 5 | 10006 | 5 | 7 |
12 | 5 | 10007 | 5 | 7 |
12 | 5 | 10008 | 5 | 7 |
12 | 5 | 10011 | 5 | 7 |
12 | 5 | 10012 | 5 | 7 |
12 | 5 | 10014 | 3 | 7 |
12 | 5 | 10015 | 5 | 7 |
12 | 5 | 10020 | 5 | 7 |
12 | 5 | 10022 | 5 | 7 |
12 | 5 | 10024 | 5 | 7 |
12 | 5 | 10028 | 5 | 7 |
12 | 5 | 10029 | 1 | 7 |
12 | 5 | 10035 | 3 | 7 |
12 | 5 | 10036 | 5 | 7 |
12 | 5 | 10037 | 3 | 7 |
12 | 5 | 10038 | 3 | 7 |
48 | 10 | 10001 | 1 | 6 |
48 | 10 | 10002 | 1 | 6 |
48 | 10 | 10003 | 1 | 6 |
48 | 10 | 10004 | 1 | 6 |
48 | 10 | 10005 | 1 | 6 |
Try this measure. It calculates the average score for each ID and then counts the distinct IDs within each bucket.

Average Distinct Count =
VAR vRangeMin =
    MAX ( ScoreBucket[RangeMin] )
VAR vRangeMax =
    MAX ( ScoreBucket[RangeMax] )
VAR vTable =
    ADDCOLUMNS (
        VALUES ( FactTable[ID] ),
        "@Amount", CALCULATE ( AVERAGE ( FactTable[Score_value] ) )
    )
VAR vResult =
    CALCULATE (
        DISTINCTCOUNT ( FactTable[ID] ),
        FILTER ( vTable, [@Amount] >= vRangeMin && [@Amount] <= vRangeMax )
    )
RETURN
    vResult
Proud to be a Super User!
@DataInsights
Thank you for your response. I tried the measure you provided and ran it through a validation. The measure did a good job of placing scores under the correct bucket, but the counts are too high. It looks like the measure is counting every score record per ID under a bucket, rather than averaging all scores for an ID into a single value and then counting that under a bucket. Validation measures are below; I am also linking a de-identified version of my full dataset with NULL scores removed for space.
A distinct count of my data shows there are 988 unique IDs. Here I've got the results from the measure compared to a distinct count of ID per score value. In the desired state, the sum of the counts from each bucket would match the total distinct count of IDs, achieved by averaging all scores per provider while still letting that rollup average change if a user filters the score category. Is that possible?
Thank you!
Thank you so much, that works perfectly. And you've given me my next topic to learn! Using variables seems really powerful.
Glad to hear that works. Yes, variables are really powerful. They make coding and troubleshooting easier, especially when multiple levels of logic are involved.
I tweaked the measure to handle boundary cases, and it now totals 988. An average of exactly 1, for example, was previously being counted in both the "0-1" and "1-2" buckets. Notice the filter now uses ">" instead of ">=":
Average Distinct Count =
VAR vRangeMin =
    MAX ( ScoreBucket[RangeMin] )
VAR vRangeMax =
    MAX ( ScoreBucket[RangeMax] )
VAR vTable =
    ADDCOLUMNS (
        VALUES ( FactTable[ID] ),
        "@Amount", CALCULATE ( AVERAGE ( FactTable[Score_value] ) )
    )
VAR vResult =
    CALCULATE (
        DISTINCTCOUNT ( FactTable[ID] ),
        FILTER ( vTable, [@Amount] > vRangeMin && [@Amount] <= vRangeMax )
    )
RETURN
    vResult
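As a quick sanity check (my sketch, using the same assumed table and measure names as above), summing the bucket measure across all bucket rows should return the overall distinct ID count — 988 in the dataset above — confirming that every ID lands in exactly one bucket:

Bucket Total Check =
SUMX (
    VALUES ( ScoreBucket[Range Name] ),
    [Average Distinct Count]
)

Comparing this against a plain DISTINCTCOUNT ( FactTable[ID] ) in a card visual makes boundary problems like the ">=" overlap easy to spot.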
Proud to be a Super User!
Try this measure:

Distinct Count ID =
VAR vRangeMin =
    MAX ( ScoreBucket[RangeMin] )
VAR vRangeMax =
    MAX ( ScoreBucket[RangeMax] )
VAR vResult =
    CALCULATE (
        DISTINCTCOUNT ( FactTable[ID] ),
        FactTable[Score_value] >= vRangeMin,
        FactTable[Score_value] <= vRangeMax
    )
RETURN
    vResult
I added an additional row of data for Segment 10 (different ID) to ensure it calculates properly:
Proud to be a Super User!