Hello everybody,
for a Data Quality Dashboard I defined some rules, and for each rule I want to monitor what share of the data fails it. I want to display this over time. It looks like this:
I would expect the average value for accuracy to be about 16%. Instead, the values sum to 16.3% over all quarters.
The measure takes the number of failed rows for each quarter and divides it by the total number of rows in the entire selected timespan. Why does it do this, and how can I solve the problem?
For completeness: Failed, Passed and FailedRatio are measures defined as:
Failed = DISTINCTCOUNT(fsri_dq_rules[claim_number])
Passed = - [Failed] + CALCULATE(TotalRows[TotalRows];ALLSELECTED())
FailedRatio = [Failed] / ([Failed] + [Passed])
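With made-up numbers (hypothetical, not my real data), the effect can be reproduced like this:

```python
# Made-up per-quarter numbers (hypothetical, just to reproduce the effect)
failed = [40, 38, 42, 43]       # failed rows per quarter
total = [250, 240, 260, 250]    # total rows per quarter
grand_total = sum(total)        # 1000 rows over the whole selected timespan

# What the visual does now: quarterly failed / grand total.
# The quarterly bars sum to the overall ratio instead of each showing ~16%.
shown = [f / grand_total for f in failed]
print(sum(shown))               # 0.163 -> the 16.3% "sum over all quarters"

# What I want: quarterly failed / that quarter's own total rows.
wanted = [f / t for f, t in zip(failed, total)]
print([round(r, 3) for r in wanted])   # [0.16, 0.158, 0.162, 0.172]
```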
Any help would be much appreciated 🙂
Thanks
Klaus
Hi @Anonymous,
Please try modifying your measure as below to see if it works:
Passed = - [Failed] + CALCULATE(TotalRows[TotalRows];ALLEXCEPT('TableName','TableName'[Quarter]))
By the way, what is the formula of TotalRows[TotalRows]?
Regards,
Yuliana Gu