Hi everyone,
I haven't found any good articles on this, and maybe that's because it's an "it depends" scenario. So I wanted to have a discussion with you all: which DAX functions perform faster than a comparable alternative? For example, I recently learned that SUMX is a row-level (iterator) function and can be slower than doing CALCULATE(SUM()). The assumption for this discussion is that whatever functions we are comparing CAN achieve the same result either way. I just want to know some DAX function alternatives that could speed up the performance of my models!
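To illustrate the SUMX vs. CALCULATE(SUM()) comparison, here's a minimal sketch. The Sales table and its Amount column are hypothetical; the point is that the iterator version evaluates an expression per row, while the plain aggregation is a single column scan the storage engine can optimize directly:

```dax
-- Iterator version: evaluates the expression row by row over Sales
Total Sales Iterated = SUMX ( Sales, Sales[Amount] )

-- Plain aggregation version: a single column scan
Total Sales = CALCULATE ( SUM ( Sales[Amount] ) )
```

Note that SUMX is still the right tool when the row expression can't be reduced to a single column (e.g. Sales[Qty] * Sales[Price]); the comparison only applies when both forms produce the same result.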
Thanks everyone; I look forward to your feedback and responses.
Personally, I think it's more important to figure out your data model before diving into DAX. For example, when you create a VAR over a fact table, the evaluation will be much slower than doing it over a dimension table, since the row count is much lower on the dim table. The same goes for any other DAX evaluation. Reading up on star schema design will help you there, and I think that's one of the major contributors to more efficient evaluations.
Another thing I recently discovered while working with large datasets is filtering on integers rather than strings. For example, say my dataset has sales, and customers are in a Gold, Silver, or Platinum tier. If I want to filter via CALCULATE, rather than using Customer[Tier]="Platinum" or Customer[Tier]="Gold"... I'll just create a custom integer column in Transform Data where Silver=1, Gold=2, Platinum=3... and then filter with Customer[TierInt]=1, and so on.
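As a quick sketch of the integer-filter idea (the Sales and Customer tables, and the Customer[TierInt] column added in Power Query, are hypothetical names for this example):

```dax
-- String filter on the tier column
Platinum Sales (String) =
CALCULATE (
    SUM ( Sales[Amount] ),
    Customer[Tier] = "Platinum"
)

-- Same filter via the integer surrogate column (Silver=1, Gold=2, Platinum=3)
Platinum Sales (Int) =
CALCULATE (
    SUM ( Sales[Amount] ),
    Customer[TierInt] = 3
)
```

Integer comparisons are cheaper for the engine than string comparisons, and the integer column typically compresses better; the trade-off is a less readable filter expression, so keep the mapping documented.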
For large datasets (1M+ rows), using COUNTROWS and VALUES in combination may put less strain on the DAX engine than DISTINCTCOUNT.
This is one point I picked up from a Udemy course; I'm looking for similar performance-based alternatives now.
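Side by side, the two patterns look like this (Sales[CustomerID] is a hypothetical column; both measures return the number of distinct customers in the current filter context):

```dax
-- Single-function distinct count
Customer Count = DISTINCTCOUNT ( Sales[CustomerID] )

-- Alternative: materialize the distinct values, then count the rows
Customer Count Alt = COUNTROWS ( VALUES ( Sales[CustomerID] ) )
```

Whether one actually outperforms the other depends on the data and the query plan, so it's worth comparing both in DAX Studio or Performance Analyzer on your own model rather than assuming either is always faster.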