
Rdarshana
Helper I

Native query brings all data into the tabular visual when currency conversion is applied.

Hi,

I am wondering if anyone has faced this issue before.

I have an SDM with a fact table containing data at the date level.
The fact table is joined to the calendar table on Date (inner join).
The fact table is joined to the monthly exchange rate table on Exchange Rate Date, which is the 1st of every month (a many-to-many join).

Dimension A is joined to the fact table on dimension ID (inner join).
The Exchange Rate table is joined to the To_Currency Code table (this contains the currency code that the conversion targets).

My reporting requirement is - 
UI Slicers - Date slicer (from calendar table), Dimension A, and To_Currency Code.

Report contains a table with 15+ attributes from the Fact table and currency conversion logic for the measure. 

What I have seen with testing - 

For a 2-month date slicer, for a given attribute from dimension A, and with no currency conversion logic applied,
the table displays data down to the lowest level of granularity.
For example, if the date filter is 1/1/2025 - 2/28/2025, the table shows 2 records based on the dimension A filter.

Now, if I apply the currency conversion logic, the table visual throws an error stating that the query is returning over 1 million records.

Upon further investigation, I see that a large number of native queries are generated in the database (we are using Databricks).
One of the generated queries pulls in all records from the fact table, and I believe this is the problem.

Also, when I remove the lowest level of granularity from the table (that is, the ID column), the currency conversion metric works just fine.

Converted Measure =
VAR ConversionStartDate = MIN ( 'Calendar'[Date] ) -- Start date from slicer
VAR ConversionEndDate = MAX ( 'Calendar'[Date] ) -- End date from slicer
VAR TargetCurrencyCode = SELECTEDVALUE ( 'To_Currency'[Currency Code] ) -- Target currency from slicer
VAR SourceCurrency = SELECTEDVALUE ( 'Fact Table'[Currency Code] ) -- Source currency is USD
-- If the target currency is USD, return the total measure without conversion
RETURN
    IF (
        TargetCurrencyCode = "USD",
        SUM ( 'Fact Table'[Measure1] ),
        VAR FilteredRates =
            FILTER (
                'monthly_average_exchange_rate',
                'monthly_average_exchange_rate'[From Currency] = SourceCurrency
                    && 'monthly_average_exchange_rate'[To Currency] = TargetCurrencyCode
                    && 'monthly_average_exchange_rate'[Rate Date] >= EOMONTH ( ConversionStartDate, -1 ) + 1
                    && 'monthly_average_exchange_rate'[Rate Date] <= EOMONTH ( ConversionEndDate, -1 ) + 1
            )
        VAR ConversionRate = MAXX ( FilteredRates, 'monthly_average_exchange_rate'[Rate] )
        RETURN
            IF (
                NOT ISBLANK ( ConversionRate ),
                SUM ( 'Fact Table'[Measure1] ) * ConversionRate,
                BLANK ()
            )
    )


Screenshot of the SDM: Rdarshana_0-1750285438882.png

If I remove the lowest level of granularity, the conversion logic works. If I add it, the conversion logic throws the 1M+ records error.

If there are supposed to be only 2 records for the given date slicer, then the currency conversion logic should only calculate and convert data for those two records.
In this case, it is pulling in all records, as seen in the generated native query.

Any ideas on how to resolve this would help.


11 REPLIES
v-hjannapu
Community Support

Hi @Rdarshana,

I would also like to take a moment to thank @grazitti_sapna and @vojtechsima for actively participating in the community forum and for the solutions you've been sharing. Your contributions make a real difference.
 

I wanted to check if you had the opportunity to review the information provided. Please feel free to contact us if you have any further questions. If the response has addressed your query, please Accept it as a solution so that other community members can find it easily.

Thank you

Harshitha.

Hi @Rdarshana,

Just wanted to check whether you have had the opportunity to review the suggestion provided.

If the response has addressed your query, please Accept it as a solution and give a 'Kudos' so other members can easily find it.

 

Thank You.

Harshitha.

Still testing it.

Hi @Rdarshana,
Thank you for the update.

Please take your time with the testing. If you run into any issues or need further clarification, feel free to reach out.

Regards,
Harshitha.

Hi @Rdarshana,

I wanted to check if you had the opportunity to review the information provided. Please feel free to contact us if you have any further questions. If my response has addressed your query, please Accept it as a solution so that other community members can find it easily.


Thank you.

grazitti_sapna
Super User

Hi @Rdarshana,

 

The issue is with the currency conversion logic:

VAR FilteredRates =
    FILTER (
        'monthly_average_exchange_rate',
        ...
    )
VAR ConversionRate =
    MAXX ( FilteredRates, 'monthly_average_exchange_rate'[Rate] )

 

This FILTER doesn't evaluate once per date; it is re-evaluated for every row in your visual, and because you've added the ID field, the number of evaluations is multiplied dramatically.

Even worse, your model involves:

  • A many-to-many relationship (between Fact and exchange rate table on Exchange Rate Date)

  • Filtering using SELECTEDVALUE() on potentially non-unique combinations (To_Currency, From_Currency, Rate Date)

  • A complex, dynamic filter (>=, <=) for each record

This forces row context → filter context conversion per row in the table, generating large native queries.

 

You can try the methods below.

 

Refactor the conversion logic using TREATAS, which lets you push filters from calculated values more cleanly:

 

Converted Measure =
VAR ConversionStartDate = MIN ( 'Calendar'[Date] )
VAR ConversionEndDate = MAX ( 'Calendar'[Date] )
VAR TargetCurrencyCode = SELECTEDVALUE ( 'To_Currency'[Currency Code] )
VAR SourceCurrency = SELECTEDVALUE ( 'Fact Table'[Currency Code] )
VAR RateDates =
    ADDCOLUMNS (
        CALENDAR (
            EOMONTH ( ConversionStartDate, -1 ) + 1,
            EOMONTH ( ConversionEndDate, -1 ) + 1
        ),
        "FromCurrency", SourceCurrency,
        "ToCurrency", TargetCurrencyCode
    )
VAR FilteredRates =
    CALCULATETABLE (
        'monthly_average_exchange_rate',
        TREATAS (
            RateDates,
            'monthly_average_exchange_rate'[Rate Date],
            'monthly_average_exchange_rate'[From Currency],
            'monthly_average_exchange_rate'[To Currency]
        )
    )
VAR ConversionRate = MAXX ( FilteredRates, 'monthly_average_exchange_rate'[Rate] )
RETURN
    IF (
        TargetCurrencyCode = "USD",
        SUM ( 'Fact Table'[Measure1] ),
        IF (
            NOT ISBLANK ( ConversionRate ),
            SUM ( 'Fact Table'[Measure1] ) * ConversionRate,
            BLANK ()
        )
    )

 

This ensures rate filtering happens in bulk, not row by row.

 

You can try the following as well.

 

Instead of applying exchange rate per record, try:

  • Pre-aggregating measure per month

  • Then doing conversion using a monthly average exchange rate

This drastically reduces the calculation context:

 

Converted Measure (Monthly) =
SUMX (
    VALUES ( 'Calendar'[MonthYear] ), -- or the rate date
    VAR MonthlyTotal =
        CALCULATE ( SUM ( 'Fact Table'[Measure1] ) )
    VAR Rate =
        CALCULATE (
            MAX ( 'monthly_average_exchange_rate'[Rate] ),
            FILTER ( 'monthly_average_exchange_rate', ... ) -- same currency-pair and rate-date conditions as in the measure above
        )
    RETURN MonthlyTotal * Rate
)

 

 



 

@grazitti_sapna 
I tested the two measures. The currency conversion logic works as such.
But when I add an ID (the lowest level of granularity of the table), Power BI shows the same error in the table visual -
"Error fetching data for this visual. The resultset of a query to external dataset has exceeded the maximum allowed size of 1 million rows."

So, now my question is - 
if we keep the monthly conversion logic as is, then do you think it would make sense to break the many-to-many join between the fact table and the monthly exchange rate table?

Update - I broke the many-to-many join between the monthly exchange rate table and the fact table by introducing two bridge tables, so now the joins are all one-to-many.

But in spite of making this change and adding the conversion logic for monthly aggregation, I am still seeing the 1 million records issue with the table visual when the ID column is included.

Hi @Rdarshana,

Power BI's external dataset query limit is 1 million rows.

When you use DirectQuery, and include the ID (lowest granularity) in your visual, Power BI generates a query that returns one row per ID per date (or per measure). This can easily exceed 1 million rows, especially if:

  • You have many IDs (e.g., transactions, invoice lines, etc.)

  • You combine that with date or currency dimensions

  • Even if you're aggregating monthly, including ID in the visual disables pre-aggregation optimization

Below are a few suggestions:

1. Avoid Using ID in Visuals Unless Necessary

  • The root of the 1M row overflow is not in the measure logic anymore — it's the visual trying to retrieve too many rows.

  • Try to summarize the data before adding ID to the visual.

  • If you must show ID, implement pagination, filters, or drill-through.
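As a sketch of the "filters" idea above (the measure reference and row threshold are illustrative assumptions, not from this thread), a guard measure can refuse to compute until the filter context is narrow enough, so the oversized native query is never issued:

```
-- Hypothetical guard: block ID-level detail until the slicers have
-- reduced the fact table below an illustrative row threshold.
Guarded Converted Measure =
IF (
    CALCULATE ( COUNTROWS ( 'Fact Table' ) ) > 100000,
    ERROR ( "Narrow the date / Dimension A filters to view ID-level detail." ),
    [Converted Measure]
)
```

COUNTROWS here still issues a query, but an aggregate one that returns a single row, which DirectQuery handles cheaply.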

2. Use Aggregated Tables

If your model allows it, materialize pre-aggregated tables (e.g., monthly totals per currency per ID or product), then use those tables in visuals:

  • Create a new aggregated table in Power BI using DAX or in the source DB:

AggregatedFact =
SUMMARIZE (
    'Fact Table',
    'Fact Table'[ID],
    'Calendar'[Month],
    'Fact Table'[Currency Code],
    "TotalMeasure", SUM ( 'Fact Table'[Measure1] )
)

  • Then, apply currency conversion on top of this pre-aggregated table.
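Assuming the AggregatedFact sketch above, and assuming its Month column holds the first-of-month date so it matches [Rate Date], the conversion could then iterate the small aggregated table instead of the raw fact rows. This is a hedged sketch, not a drop-in measure:

```
Converted Aggregated Measure =
VAR TargetCurrencyCode = SELECTEDVALUE ( 'To_Currency'[Currency Code] )
RETURN
    SUMX (
        'AggregatedFact',
        -- capture the current row's keys as scalars for the rate lookup
        VAR RateMonth = 'AggregatedFact'[Month]
        VAR FromCurrency = 'AggregatedFact'[Currency Code]
        VAR Rate =
            CALCULATE (
                MAX ( 'monthly_average_exchange_rate'[Rate] ),
                'monthly_average_exchange_rate'[Rate Date] = RateMonth,
                'monthly_average_exchange_rate'[From Currency] = FromCurrency,
                'monthly_average_exchange_rate'[To Currency] = TargetCurrencyCode
            )
        RETURN
            'AggregatedFact'[TotalMeasure]
                * IF ( TargetCurrencyCode = "USD", 1, Rate )
    )
```

Because the rate lookup runs once per aggregated row rather than once per fact row, the generated native queries stay small.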

3. Use Drill-through Instead of One Big Table

Rather than allowing a flat table at full granularity, design your report as:

  • Overview page: summary by month or product

  • Drill-through page: show ID-level detail only when context is selected

 

vojtechsima
Super User
Super User

Hello, @Rdarshana ,

I don't like the many-to-many relationship, especially if its cross-filter direction is set to Both.

I would consider changing the rate table and generating a surrogate key (SK), so you can join your fact table with a one-to-many relationship (exchange -> fact).

 

Perhaps, from the date range, you can create a row for each date, add the valid currency to it, and build the SK from that, then build the same key in your fact table. That may help.
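A minimal sketch of that surrogate-key idea, with illustrative column names (these are assumptions, not the actual model): calculated columns deriving the same month-plus-currency key on both sides, which can carry a one-to-many relationship if the key is unique on the rate side.

```
-- On 'monthly_average_exchange_rate' (unique only if there is a
-- single target currency per month / from-currency pair):
Rate SK =
FORMAT ( 'monthly_average_exchange_rate'[Rate Date], "YYYYMM" ) & "|"
    & 'monthly_average_exchange_rate'[From Currency]

-- On 'Fact Table': the same key derived from the transaction date
Fact Rate SK =
FORMAT ( 'Fact Table'[Date], "YYYYMM" ) & "|"
    & 'Fact Table'[Currency Code]
```

If the rate table holds multiple target currencies per month, the key is duplicated on the rate side, and a distinct bridge table of keys would be needed to keep the chain one-to-many.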

@vojtechsima 

I created two bridge tables between the monthly exchange rate table and the fact table. 
1. DistinctRateDates table (Joined to Exchange Rate table - one to many join on Rate Date)

DistinctRateDates =
DISTINCT (
    SELECTCOLUMNS (
        'monthly_average_exchange_rate',
        "Rate Date", 'monthly_average_exchange_rate'[Rate Date]
    )
)

2. DateToRateBridge (joined to the DistinctRateDates table many-to-one; joined to the Fact table one-to-many)
DateToRateBridge =
SELECTCOLUMNS (
    'Spend Calendar',
    "Date", 'Spend Calendar'[Date],
    "Rate Date", DATE(YEAR('Spend Calendar'[Date]), MONTH('Spend Calendar'[Date]), 1)
)


After this, I applied the monthly calculation for currency conversion. The logic works, but the moment I add the ID field, it blows up with the 1 million records error message.
The ID field is a must for the tabular visual.
Is there anything else that could be looked into?
