Don't miss out! 2025 Microsoft Fabric Community Conference, March 31 - April 2, Las Vegas, Nevada. Use code MSCUST for a $150 discount. Prices go up February 11th. Register now.
Hi everyone!
I have been working with PBI practically since its start in 2015, and at this point I'm quite used to building my reports on a star schema data model, because I know very well that this is the best practice within PBI (and I would dare to say, within the current analytics landscape... but that's another topic 👺).
However, there are specific scenarios that I'd like to learn how to address better, so I can comply with the requests that are thrown at me. 😎
With this in mind, I have never quite known how to deal with large, wide aggregated fact tables and the multiple filtering scenarios that report stakeholders would like to apply on top of the numeric fields/columns of such tables.
To give an example: when the table is very wide, say 50 numeric fields and 6 categorical fields, keyed by one or more ID fields/columns, how would one properly handle the fact that report stakeholders want to filter the table by 15 of those fields/columns and present some of the table's information in a matrix/table visual on a dashboard?
Is there a 'best way' on how to design the data model in order to deal with this?
To clarify:
- of course, one could 'go back' and re-design the data model to normalize the aggregated fact table and put a star schema in place, but the report requirement is really to filter the data by the aggregated table's numeric fields, which I believe is hard to do with the normalized data.
- table schema/structure/columns would be something like this:
id_field1 | ... | id_field4 | decimal_field_1 | ... | decimal_field_50 | categorical_field_1 | ... | categorical_field_6
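For context, the 'normalize' option mentioned in the first clarification bullet could be sketched as a DAX calculated table that unpivots the decimal columns into a long metric/value shape. This is purely illustrative; the names `WideFact`, `NormalizedFact`, `Metric`, and `Value` are placeholders, not tables from my actual model:

```dax
-- Hypothetical sketch: unpivot the wide fact table's decimal columns
-- into one (id, Metric, Value) row per field, via UNION of projections.
NormalizedFact =
UNION (
    SELECTCOLUMNS (
        WideFact,
        "id_field1", WideFact[id_field1],
        "Metric", "decimal_field_1",
        "Value", WideFact[decimal_field_1]
    ),
    SELECTCOLUMNS (
        WideFact,
        "id_field1", WideFact[id_field1],
        "Metric", "decimal_field_2",
        "Value", WideFact[decimal_field_2]
    )
    -- ... one SELECTCOLUMNS projection per remaining decimal field
)
```

The catch, as noted above, is that slicing by a numeric threshold on a specific field becomes awkward once the fields live in rows rather than columns.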
Well, I hope my message isn't too confusing; if it is, please do let me know, and I'll do my best to explain.
Thanks in advance for your help.
Hi @Sahir_Maharaj !
Thanks for your feedback.
Perhaps I wasn't so clear in my explanation, but what I'm having a hard time with isn't really establishing a dynamic metric, but rather how to design a data model that would properly handle a scenario where report stakeholders want to apply around 40-50 individual filters on top of a single wide table.
Perhaps you meant something else between the lines and I didn't get it. If that's the case, please accept my apologies and kindly let me know exactly what you had in mind.
Thank you.
Hello @SergioTorrinha,
Here's a conceptual example to apply dynamic filtering that responds to the selection.
Dynamic Filter Measure =
VAR SelectedField = SELECTEDVALUE(FieldSelectionTable[FieldName], "decimal_field_1")
VAR FilterValue = 100 // This could be dynamically set through another mechanism, like a parameter
RETURN
SWITCH(
SelectedField,
"decimal_field_1", CALCULATE(SUM(Table[decimal_field_1]), Table[decimal_field_1] > FilterValue),
"decimal_field_2", CALCULATE(SUM(Table[decimal_field_2]), Table[decimal_field_2] > FilterValue),
-- ... one branch per field, through decimal_field_49
"decimal_field_50", CALCULATE(SUM(Table[decimal_field_50]), Table[decimal_field_50] > FilterValue),
BLANK()
)
Just a note - Performance might be impacted with very wide tables and complex dynamic measures, especially on large datasets.
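One thing the measure above assumes is a disconnected `FieldSelectionTable` feeding the slicer that `SELECTEDVALUE` reads from. A minimal sketch of such a table (illustrative only, with names matching the example measure) could be:

```dax
-- Illustrative only: a disconnected table holding the field names the
-- slicer exposes; the measure's SELECTEDVALUE reads the user's pick from it.
FieldSelectionTable =
DATATABLE (
    "FieldName", STRING,
    {
        { "decimal_field_1" },
        { "decimal_field_2" }
        -- ... one row per filterable decimal field
    }
)
```

A slicer on FieldSelectionTable[FieldName] then drives the SWITCH without any physical relationship to the fact table.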