Hi,
I am wondering if anyone has faced this issue before.
I have an SDM with a fact table containing data at date level.
The fact table is joined to the calendar table on Date (inner join).
The fact table is joined to the monthly exchange rate table on Exchange Rate Date, which is the 1st of every month (many-to-many join).
Dimension A is joined to the fact table on dimension ID (inner join).
The exchange rate table is joined to the To_Currency Code table (this contains the currency code to which the conversion takes place).
My reporting requirement is -
UI Slicers - Date slicer (from calendar table), Dimension A, and To_Currency Code.
Report contains a table with 15+ attributes from the Fact table and currency conversion logic for the measure.
What I have seen in testing -
For a 2-month date slicer, for a given attribute from dimension A, and with no currency conversion logic applied, the table displays data down to the lowest level of granularity.
So, for example, if the date filter is 1/1/2025 - 2/28/2025, the table shows 2 records of data based on the dimension A filter.
Now, if I apply the currency conversion logic, the table visual throws an error stating that it is returning 1 million records.
Upon further investigation, I see that a large number of native queries are generated in the database (we are using Databricks).
One of the generated queries pulls in all records from the fact table, and I believe this is the problem.
Also, when I remove the lowest level of granularity from the table (that is, the ID column), the currency conversion measure works just fine.
Converted Measure =
VAR ConversionStartDate = MIN('Calendar'[Date]) -- Start date from slicer
VAR ConversionEndDate = MAX('Calendar'[Date]) -- End date from slicer
VAR TargetCurrencyCode = SELECTEDVALUE('To_Currency'[Currency Code]) -- Target currency from slicer
VAR SourceCurrency = SELECTEDVALUE('Fact Table'[Currency Code]) -- Source currency is in USD
-- If the target currency is USD, return the measure without conversion
RETURN
    IF(
        TargetCurrencyCode = "USD",
        SUM('Fact Table'[Measure1]),
        VAR FilteredRates =
            FILTER(
                'monthly_average_exchange_rate',
                'monthly_average_exchange_rate'[From Currency] = SourceCurrency &&
                'monthly_average_exchange_rate'[To Currency] = TargetCurrencyCode &&
                'monthly_average_exchange_rate'[Rate Date] >= EOMONTH(ConversionStartDate, -1) + 1 &&
                'monthly_average_exchange_rate'[Rate Date] <= EOMONTH(ConversionEndDate, -1) + 1
            )
        VAR ConversionRate = MAXX(FilteredRates, 'monthly_average_exchange_rate'[Rate])
        RETURN
            IF(
                NOT ISBLANK(ConversionRate),
                SUM('Fact Table'[Measure1]) * ConversionRate,
                BLANK()
            )
    )
Screenshot of the SDM
If I remove the lowest level of granularity, the conversion logic works. If I add it back, the conversion logic throws the 1M+ records error.
If there are supposed to be only 2 records for the given date slicer, then the currency conversion logic should only calculate and convert the data for those two records.
Instead, it is pulling in all records, as seen in the generated native query.
Any ideas on how to resolve this would help.
Hi @Rdarshana,
The issue is with the currency conversion logic:
VAR FilteredRates =
    FILTER(
        'monthly_average_exchange_rate',
        ...
    )
VAR ConversionRate =
    MAXX(FilteredRates, 'monthly_average_exchange_rate'[Rate])
This doesn't evaluate once per date; it is re-evaluated per row in your visual, and because you've added the ID field, that evaluation is multiplied dramatically.
Even worse, your model involves:
- A many-to-many relationship between the fact and exchange rate tables on Exchange Rate Date
- Filtering using SELECTEDVALUE() on potentially non-unique combinations (To_Currency, From_Currency, Rate Date)
- A complex, dynamic filter (>=, <=) evaluated for each record
This forces a row context → filter context conversion per row in the table, generating large native queries.
You can try the methods below.
First, refactor the conversion logic using TREATAS, which lets you push filters from calculated values more cleanly:
Converted Measure =
VAR ConversionStartDate = MIN('Calendar'[Date])
VAR ConversionEndDate = MAX('Calendar'[Date])
VAR TargetCurrencyCode = SELECTEDVALUE('To_Currency'[Currency Code])
VAR SourceCurrency = SELECTEDVALUE('Fact Table'[Currency Code])
VAR RateDates =
    ADDCOLUMNS(
        CALENDAR(
            EOMONTH(ConversionStartDate, -1) + 1,
            EOMONTH(ConversionEndDate, -1) + 1
        ),
        "FromCurrency", SourceCurrency,
        "ToCurrency", TargetCurrencyCode
    )
VAR FilteredRates =
    CALCULATETABLE(
        'monthly_average_exchange_rate',
        TREATAS(
            RateDates,
            'monthly_average_exchange_rate'[Rate Date],
            'monthly_average_exchange_rate'[From Currency],
            'monthly_average_exchange_rate'[To Currency]
        )
    )
VAR ConversionRate = MAXX(FilteredRates, 'monthly_average_exchange_rate'[Rate])
RETURN
    IF(
        TargetCurrencyCode = "USD",
        SUM('Fact Table'[Measure1]),
        IF(
            NOT ISBLANK(ConversionRate),
            SUM('Fact Table'[Measure1]) * ConversionRate,
            BLANK()
        )
    )
This ensures rate filtering happens in bulk, not row by row.
Alternatively, instead of applying the exchange rate per record, try:
- Pre-aggregating the measure per month
- Then doing the conversion using the monthly average exchange rate
This drastically reduces the calculation context:
Converted Measure (Monthly) =
SUMX(
    VALUES('Calendar'[MonthYear]), -- or RateDate
    VAR MonthlyTotal =
        CALCULATE(SUM('Fact Table'[Measure1]))
    VAR Rate =
        CALCULATE(
            MAX('monthly_average_exchange_rate'[Rate]),
            FILTER('monthly_average_exchange_rate', ...) -- same currency and month conditions as in your original measure
        )
    RETURN MonthlyTotal * Rate
)
Hello, @Rdarshana,
I don't like the many-to-many relationship, especially if its cross-filter direction is set to Both.
I would consider changing the rate table and generating a surrogate key (SK), so you can join your fact table with a one-to-many relationship (exchange rate -> fact).
Perhaps from the date range you can create a row for each date and add the valid currency to it, then create the SK from those columns; build the SK the same way in your fact table. That may help.
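A minimal sketch of that surrogate-key idea as DAX calculated columns. The expanded daily rate table 'daily_exchange_rate' and its column names are assumptions for illustration, not part of the original model, and the sketch assumes one rate row per date and source currency:

```dax
-- Hypothetical SK on the expanded (one row per date) rate table
RateSK =
    FORMAT('daily_exchange_rate'[Rate Date], "YYYYMMDD") & "|" &
    'daily_exchange_rate'[From Currency]

-- The same key built in the fact table, so the two columns line up
FactRateSK =
    FORMAT('Fact Table'[Date], "YYYYMMDD") & "|" &
    'Fact Table'[Currency Code]
```

If RateSK is unique on the rate side (you may need to include the target currency in the key, or keep target currencies in a separate dimension), you can replace the many-to-many join with a regular one-to-many relationship from the rate table to the fact table.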