Hi Guys, maybe someone will be able to help.
So I have a measure in my visual. It is a kind of statistical measure (percentile), calculated on the value surv_model_score, where the results are affected by the slicers applied (in general, applying slicers decreases the population volume, i.e. the number of rows), so I had to use the ALLSELECTED() function to include the filtering dimensions in the context. Here are my calcs:
> rank_survival= RANK(
Then I wanted to filter by the measure Long Lasting Rating by using it in a slicer; however, using measures in slicers is not possible, so I followed the instructions in this video. Below are the calcs built based on the video:
> LLR_COMMUNITY_check_ = IF([Long Lasting Rating]="★","Promoted","Not Promoted")
> LLR_COMMUNITY_filter =
+ table "LLR_MANUAL_SLICER created with values as following:
Everything is fine until the last step, when I put my new measure into the visual filter and set it to "1"; then my Long Lasting Rating gets broken and every record has a "star", which is an incorrect state.
Am I missing something? Unfortunately I cannot put the LLR_COMMUNITY_filter measure in the ALLSELECTED() function, which potentially would fix the Long Lasting Rating calc. BTW, filtering works perfectly fine: when I choose "Promoted" from the slicer, only the records originally having a "star" are displayed and the rest are filtered out. The issue is when no selection is made in the "Promoted/Not Promoted" slicer, or if I select "Not Promoted" - I don't want "stars" displayed for all rows in these scenarios.
Hi @Szokens,
Thanks for reaching out to the Microsoft Fabric community forum.
It looks like you are facing an issue when trying to use measures in slicers. @DataNinja777 has already responded to your query, and you followed up with another query where, after applying the formula, you get another error which prevents you from saving the new formula. Can you please confirm whether your issue has been resolved, or were you able to move forward with the help of @DataNinja777's response?
I would also like to take a moment to thank @DataNinja777 for actively participating and for the solutions you've been sharing in the community forum. Your contributions make a real difference.
If I have misunderstood your needs or you still have problems, please feel free to let us know.
Best Regards,
Hammad.
Community Support Team
If this post helps then please mark it as a solution, so that other members find it more quickly.
Thank you.
Yes, so the solution from @DataNinja777 is correct in a very simplified scenario; however, it is still not working for the original table visual I am working on. It looks like one of the measures I already have on the visual (using the RANK() function) prevents @DataNinja777's solution from working at all.
Hi @Szokens,
When multiple measures using ALLSELECTED() or RANKX() are used in the same visual, especially when one of them is also being used in visual-level filters (even indirectly via slicers), they can interfere with each other’s evaluation context.
In your case, the Long Lasting Rating measure works fine on its own, but when combined with the second rank measure (based on Model score weighted), the context for calculating the LLR percentile gets unintentionally narrowed—likely because both measures rely on ALLSELECTED over the same table, but the slicer and ranking interact in unexpected ways.
To fix this, you can try the following approach:
Decouple the LLR calculation from the visual context by hardcoding the base table in the percentile logic instead of relying on ALLSELECTED. Replace this "RANKX(ALLSELECTED('YourTable'), [Score])" with "RANKX(ALL('YourTable'), [Score])" or even better, define a custom virtual table that includes only the rows you need, filtered appropriately outside the visual context.
Inside the Long Lasting Rating measure, evaluate the percentile logic in a separate variable that uses the full dataset (not affected by slicers), and then apply your filtering/slicer condition only at the end of the measure.
That way, the slicer only controls what's displayed and not how the underlying calculation behaves.
Try this:
VAR FullData = ALL('YourTable')
VAR RankVal = RANKX(FullData, [surv_model_score], , DESC)
VAR CountVal = COUNTROWS(FullData)
VAR BasePercentile = DIVIDE(RankVal - 1, CountVal - 1, 0)
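// Note: ROUND(5 - (BasePercentile * 4), 0) >= 4 is equivalent to BasePercentile <= 0.375,
// i.e. roughly the top ~37% of ranked rows by score get the star.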
VAR LLR = IF(ROUND(5 - (BasePercentile * 4), 0) >= 4, "★", " ")
VAR SlicerSelection = SELECTEDVALUE('LLR_MANUAL_SLICER'[LLR Status])
RETURN
IF(
ISBLANK(SlicerSelection),
LLR,
IF([LLR_COMMUNITY_check] = SlicerSelection, LLR, BLANK())
)
This keeps the percentile logic stable, even if other measures in the visual are using ALLSELECTED.
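And if you would rather go with the custom virtual table variant mentioned above instead of ALL over the whole table, a minimal sketch would be (table and column names here are placeholders, adjust them to your model):

VAR BaseRows =
    CALCULATETABLE(
        'YourTable',
        REMOVEFILTERS('YourTable'[SomeSlicerColumn])  // drop only the dimensions you don't want affecting the percentile
    )
VAR RankVal = RANKX(BaseRows, [Score], , DESC)

That way the other slicers still shape the ranking population, and only the dimensions you explicitly exclude are ignored.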
Best Regards,
Hammad.
Hi @Szokens,
As we haven't heard back from you, we're just following up on our previous message. I'd like to confirm whether you've successfully resolved this issue or if you need further help.
If yes, you are welcome to share your workaround and mark it as a solution so that other users can benefit as well. If you find a reply particularly helpful to you, you can also mark it as a solution.
If you still have any questions or need more support, please feel free to let us know. We are more than happy to continue to help you.
Thank you for your patience; we look forward to hearing from you.
Hi @v-mdharahman, I will check on Monday as I am out of office currently and don't have access to resources. I will for sure give my feedback!
Hi @Szokens,
Hope everything’s going smoothly on your end. We haven’t heard back from you, so I wanted to check if the issue got sorted. If yes, marking the solution would be awesome for others who might run into the same thing.
Still stuck? No worries, just drop us a message and we can jump back in on the issue.
Thank you,
Hammad.
Yes, I have tried your approach; however, I got an error like the one below. I had to put an average on surv_model_score, and then there was no error popup.
However, it doesn't work on a simple visual anyway; it only works if I slice the data significantly.
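By "put an average on surv_model_score" I mean something along these lines (a rough sketch with illustrative names, not my exact measures):

Avg surv score = AVERAGE(act_source[surv_model_score])

rank_survival_test =
RANKX(
    ALL(act_source),
    [Avg surv score],  // ranking a measure (which performs context transition) instead of the bare column avoids the error
    ,
    DESC
)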
I doubt it will work as expected anyway. Context is quite important in my report when looking at the Long Lasting Rating:
The data source contains a lot of pairs (person-team), each of which has two scores: a productivity score (Model score weighted) and our surv_model_score. Surv_model_score tells us whether a given pair is likely to cooperate for a long time or likely to collapse soon.
When a user enters the dashboard, they pick a person name and see all Selected Person - Team pairs (with different surv_model_scores), and the Long Lasting Rating is there to help the user see which pair is promoted as a "stable" pair, for the given user filter context.
Moreover, if we apply some additional filters, then we throw out a few teams, and the percentile calculation should be updated by this additional context, so someone who was not promoted initially could now be promoted within the additional filter, e.g. Distance between 0 and 200 miles.
On top of all of this, our "Rank" field is also present, which tells the user about the productivity of a given Person-Team pair by sorting the results by itself.
I have another idea. What if I get rid of including the "Rank" field below in the view:
> Rank =RANK(
DENSE,
ALLSELECTED(act_source[Signature team ID],act_source[Signature Team Name],act_source[Distance],act_source[Divisions],act_source[FA count],act_source[Average Production Category],act_source[FA Name], act_source[Average AuM], act_source[Average NYSE LoS],act_source[Submitted_Bio_Team],act_source[fa_demo],act_source[Average AuM Category],act_source[Average UBS LoS]),
ORDERBY([Model score weighted], desc,act_source[Signature team ID], ASC, act_source[Signature Team Name], desc)))
and replace it with only the measure below, but sorted in the view descending - can I somehow get the "LLR Status" slicer working as expected? Currently, when I try to include the measure below in the view, the view is not reacting to "LLR Status" slicer selections.
> Model score weighted = [Productivity table teams Value] / 100 * AVERAGE(act_source[team_prod_model_score]) + (100 - [Productivity table teams Value]) / 100 * AVERAGE(act_source[fa_prod_model_score])
Hi @Szokens,
From what you’ve described, your use case absolutely requires context-aware percentile calculations, since users filter down to specific FA-Team pairs and expect Long Lasting Rating to adapt accordingly. Given this, you’re right to be cautious about any changes that remove important context (like the ALLSELECTED scope you originally used).
Now, regarding the current issue you're experiencing, "Query has exceeded the available resources": this is likely due to the complexity of calculating percentile ranks over a large dataset, especially with context-sensitive filters and two ranking systems (surv_model_score and Model score weighted). When switching from Rank to just showing Model score weighted, you're observing that the LLR Status slicer doesn't seem to filter correctly.
Even though you're only using Model score weighted now, the LLR logic (especially with your manual slicer approach) still depends on how the DAX engine is allowed to evaluate the measure. If you're sorting the visual using Model score weighted without any associated ranking DAX, the table context might be flattening in a way that breaks your manual slicer’s interaction.
You can try a few approaches:
* If your visual is only sorted by Model score weighted, try removing the sorting temporarily to test if the slicer reacts again. If it does, the issue is likely due to how sorting forces context evaluation on the full visual.
* Instead of sorting the table directly by the Model score weighted measure, create a separate rank measure based on it (but don't show this rank in the visual, only use it for sorting):
Model_Score_Rank =
RANKX(
ALLSELECTED(act_source[Signature team ID], act_source[Signature Team Name], act_source[FA Name]),
[Model score weighted],
,
DESC,
DENSE
)
Then add Model_Score_Rank to your visual and sort the visual by Model_Score_Rank. Hide the column if needed.
This will preserve the correct context and slicing, while still achieving the sorted view you need.
* To reduce query strain, avoid recomputing the full percentile logic in multiple places. If needed, pre-compute some values as calculated columns (if the dataset is not too dynamic); see the sketch below.
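A minimal sketch of such a pre-computed column, assuming surv_model_score is a plain column on act_source with one value per row (the new column names here are just placeholders):

// Calculated columns are evaluated once at refresh, so they ignore all slicers.
Static Survival Rank =
RANKX(
    act_source,
    act_source[surv_model_score],
    ,
    DESC,
    DENSE
)

Static Survival Percentile =
DIVIDE(
    act_source[Static Survival Rank] - 1,
    COUNTROWS(act_source) - 1,
    0
)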
If I have misunderstood your needs or you still have problems, please feel free to let us know.
Best Regards,
Hammad.
Hi @v-mdharahman, what do you mean by this in bold?
"* Instead of sorting the table directly by the Model score weighted measure, create a separate rank measure based on it (but don't show this rank in the visual only use it for sorting)"
thanks,
Daniel
Hi @Szokens,
Let me clarify what I meant. Right now, you are sorting the table directly by the Model score weighted measure. That forces Power BI to re-evaluate the measure for every row and every filter combination, which is what’s making the LLR Status slicer unreliable.
What I’m suggesting instead is create a ranking measure based on Model score weighted. For example:
Model_Score_Rank =
RANKX(
ALLSELECTED(act_source[Signature team ID], act_source[Signature Team Name], act_source[FA Name]),
[Model score weighted],
,
DESC,
DENSE
)
This gives each row a rank number according to its Model score weighted. Add this "Model_Score_Rank" to your visual, then sort the table by this rank column (by clicking its header in the visual) instead of directly by Model score weighted. Once the table is sorted correctly, you don't actually need to keep the rank column visible. You can simply hide it from the visual so the user only sees the fields they care about. Sorting will still work in the background.
This way, the table is sorted properly, but you avoid breaking the interaction with the LLR Status slicer.
Best Regards,
Hammad.
Hi @Szokens,
As we haven't heard back from you, we wanted to kindly follow up to check whether the solution provided by the other users worked for the issue, or let us know if you need any further assistance.
Thanks and regards,
Anjan Kumar Chippa
Hi @Szokens,
We noticed there hasn’t been any recent activity on this thread. If your issue is resolved, marking the correct reply as a solution would be a big help to other community members.
If you still need support, just reply here and we’ll pick it up from where we left off.
Best Regards,
Hammad.
You are the BOSS 🙏 we are almost there.
After applying your fixed formula I got the following error, preventing me from saving the new formula (FYI, you can ignore the "fa-team" phrase in the measure names; I am deleting it where I can for cleansing but might have omitted to do so):
However, it was fixed when I changed the above measure code from:
> LLR_COMMUNITY_check = IF([Long Lasting Rating]="★","Promoted","Not Promoted")
to:
After that, the updated Long Lasting Rating formula could be saved. Everything works perfectly fine on an almost blank worksheet:
However, in my original visual, I have another ranking which seems to prevent the LLR Status slicer from working:
This rank is based on a different score, and the main table is actually sorted by this rank. Here are the calcs:
> Rank = RANK(
    DENSE,
    ALLSELECTED(act_source[Signature team ID], act_source[Signature Team Name], act_source[Distance], act_source[Divisions], act_source[FA count], act_source[Average Production Category], act_source[FA Name], act_source[Average AuM], act_source[Average NYSE LoS], act_source[Submitted_Bio_Team], act_source[fa_demo], act_source[Average AuM Category], act_source[Average UBS LoS]),
    ORDERBY([Model score weighted], DESC, act_source[Signature team ID], ASC, act_source[Signature Team Name], DESC)
)
> Model score weighted = [Productivity table teams Value] / 100 * AVERAGE(act_source[team_prod_model_score]) + (100 - [Productivity table teams Value]) / 100 * AVERAGE(act_source[fa_prod_model_score])
The parameter values in the above formula can be 0, 50, or 100, and they depend on the selection of another slicer present on the report page (the values come from the calculated table below).
Productivity table teams = GENERATESERIES(0, 100, 50)
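For reference, [Productivity table teams Value] here is just the usual what-if companion measure reading the slicer selection - roughly along these lines (the column GENERATESERIES creates is named [Value] by default; the fallback of 50 is only an example, my actual default may differ):

Productivity table teams Value =
SELECTEDVALUE('Productivity table teams'[Value], 50)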
Can we somehow work around this additionally?
Hi @Szokens,
The issue you're facing stems from the fact that when you apply the LLR_COMMUNITY_filter = 1 measure as a visual-level filter, it modifies the filter context before your Long Lasting Rating measure runs, which disrupts the percentile ranking logic. This results in every row receiving a star because only the rows that match the filter are left, skewing the rank and count calculations. To solve this, you should not filter the visual directly with LLR_COMMUNITY_filter. Instead, embed that logic inside a new version of the Long Lasting Rating measure so that it respects the slicer selection without breaking the calculation context of the percentile. Here’s the revised measure:
Long Lasting Rating Fixed =
VAR BasePercentile =
DIVIDE([rank_survival_fa-team_tests] - 1, [count_rows_survival_fa-team_test] - 1, 0)
VAR IsStar = IF(ROUND(5 - (BasePercentile * 4), 0) >= 4, 1, 0)
VAR FilterStatus = SELECTEDVALUE(LLR_MANUAL_SLICER[LLR Status])
RETURN
IF(
ISBLANK(FilterStatus),
IF(IsStar = 1, "★", " "),
IF([LLR_COMMUNITY_filter] = 1, IF(IsStar = 1, "★", " "), BLANK())
)
This approach ensures that your percentile calculations use the full, correctly filtered dataset based on the slicers applied via ALLSELECTED, while still respecting user selection from the LLR slicer for visibility. If nothing is selected in the manual slicer, the stars show according to actual logic. If "Promoted" or "Not Promoted" is selected, the result aligns accordingly without polluting the ranking math.
Best regards,