Hello everybody,
for a Data Quality Dashboard I defined some rules, and for each rule I want to monitor what share of the data fails the rule. I want to display this over time. It looks like this:
I would expect the average value for accuracy to be about 16%. Instead, the values summed over all quarters give 16.3%.
The measure takes the number of failed rows in each quarter and divides it by the total number of rows of the whole selected timespan. Why does it do this, and how can I fix it?
For completeness: Failed, Passed and FailedRatio are measures defined as:
Failed = DISTINCTCOUNT(fsri_dq_rules[claim_number])
Passed = - [Failed] + CALCULATE(TotalRows[TotalRows];ALLSELECTED())
FailedRatio = [Failed] / ([Failed] + [Passed])
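To illustrate with made-up numbers (the real counts are not in this post): suppose the selected timespan has 4,000 rows split evenly over four quarters, and 160, 170, 150 and 172 rows fail per quarter.

FailedRatio per quarter = 160/4,000 = 4.0%, 170/4,000 = 4.25%, 150/4,000 = 3.75%, 172/4,000 = 4.3%
Sum over the quarters = (160 + 170 + 150 + 172) / 4,000 = 652 / 4,000 = 16.3%

What I would expect instead is 160/1,000 = 16%, 170/1,000 = 17%, and so on, i.e. each quarter's failed rows divided by that quarter's own row count.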
Any help would be much appreciated 🙂
Thanks
Klaus
Hi @Anonymous,
Please try modifying your measure as below to see if it works:
Passed = - [Failed] + CALCULATE(TotalRows[TotalRows];ALLEXCEPT('TableName';'TableName'[Quarter]))
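Since I don't know your exact model, the following is only a sketch using assumed names ('fsri_dq_rules' taken from your Failed measure, and a [Quarter] column that I assume sits in the same table). Please adapt the names to your tables:

// assumed table/column names, adjust to your model
Failed = DISTINCTCOUNT(fsri_dq_rules[claim_number])
Passed = CALCULATE(TotalRows[TotalRows];ALLEXCEPT(fsri_dq_rules;fsri_dq_rules[Quarter])) - [Failed]
FailedRatio = [Failed] / ([Failed] + [Passed])

The idea is that ALLEXCEPT keeps the quarter filter of each data point, so the denominator should become the row count of that quarter rather than of the whole selected timespan.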
By the way, what is the formula of your TotalRows[TotalRows] measure?
Regards,
Yuliana Gu