
KjetilAB
Frequent Visitor

PBI Embedded performance problem

Hi.
This is my first post here, and I couldn't find this issue discussed anywhere.

I have implemented a PBI Embedded solution for our external customers. It serves about 100k users.

It is a composite model with RLS. The main PBI report uses only aggregated data in an imported model. Navigation there is good and responsive even though the data size is pretty huge.

My problem starts when the users drill through to a report page that uses DirectQuery against a detailed Azure SQL DB containing about 1 billion rows.
Running this report through Desktop and the PBI Service, it performs super fast and returns the detailed report almost instantly. The underlying DB is tuned with the correct indexes. Then I switch over and run it as a "customer" through our customer portal.

First of all, Embedded changes the SQL sent to the DB compared to the two other ways of running this report. I guess this has to do with query folding not happening the same way through Embedded.

So instead of running the queries I have tuned for, it splits the work up and runs a few extra queries. OK, I'll tune for those too, and by themselves I manage to get them to perform well.

Now I hit the actual problem. For some weird reason the embedded solution generates an extra query that produces a matrix of all possible combinations of three of the columns (attributes) for the transaction. It doesn't apply any limitation in the WHERE clause other than what is specified by the RLS, so behind the scenes it returns several million combinations even though I only want one of them. The drillthrough already carries this limitation, and it works fine in Desktop and the Service, where the WHERE clause is complete.

But Embedded builds this really massive list first, then runs a version of the two original queries: one for the sum field, which is fast, and one for the actual transaction list, which can return anything from 1 to about 150k rows. This is also fast. Then it joins the one really heavy matrix result against the transaction list and returns the table I want. That table can then be exported or checked for specific events, depending on what the user needs.
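
For illustration only (this is not the actual SQL the engine sends, and the column names are just placeholders), the extra work behaves roughly like a query that returns every existing combination of three attributes, limited only by RLS:

// Rough DAX equivalent of the unwanted "matrix" query: all existing
// combinations of three attribute columns, with only RLS applied.
EVALUATE
SUMMARIZECOLUMNS (
    'Transactions'[Merchant],
    'Transactions'[TransactionType],
    'Transactions'[SettlementSession]
)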

How can I get Embedded to skip the matrix-generating step that ruins the whole solution? Without it, everything performs almost instantly, even for huge lists.

I have read that if the drillthrough filters can be set on the visual instead of the page, the query is limited accordingly, but I am only able to get the drillthrough filters applied at page level.

Has anyone experienced this and found a solution? I need to get the response time within an acceptable duration, and I feel it is doable, but I can't find a way to affect the DirectQuery script.


8 REPLIES
v-bmanikante
Community Support

Hi @KjetilAB ,

 

Thank you for reaching out to the Microsoft Fabric Community Forum.

 

The report works fine in Power BI Desktop and the Service, but the embedded version generates extra queries, including a heavy matrix of all combinations, which slows down performance.

You can try the workarounds below, which can help improve performance.

  • Limit the number of visuals on the drill through page to only those necessary. Each visual can generate its own query, so reducing the number can help improve performance.
  • Use aggregations to reduce the data being queried (I think you have already implemented this).
  • Optimize DAX measures for better DirectQuery performance (a small sketch follows this list).
  • Instead of page-level filters, consider applying filters directly to visuals where possible. This can help reduce the scope of data queried.
  • Use Performance Analyzer to check the performance of the queries or visuals.
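
As a generic example of the DAX point above (the table, column, and measure names are placeholders, not from the report in question), a simple boolean filter argument usually translates to simpler SQL under DirectQuery than FILTER over the whole table:

// Slower pattern: FILTER iterates the entire fact table
Settled Amount (slow) =
CALCULATE (
    SUM ( 'Transactions'[Amount] ),
    FILTER ( 'Transactions', 'Transactions'[Status] = "Settled" )
)

// Usually faster under DirectQuery: a plain column (boolean) filter
Settled Amount =
CALCULATE (
    SUM ( 'Transactions'[Amount] ),
    'Transactions'[Status] = "Settled"
)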

Optimization guide for Power BI - Power BI | Microsoft Learn

Monitor report performance in Power BI - Power BI | Microsoft Learn

Best practices for faster performance in Power BI embedded analytics - Power BI | Microsoft Learn

Troubleshoot DirectQuery models in Power BI Desktop - Power BI | Microsoft Learn

Use DirectQuery in Power BI Desktop - Power BI | Microsoft Learn

 

Similar issues:

Matrix Performance issue in Direct query - Microsoft Fabric Community

Solved: How drill through works with direct query - Microsoft Fabric Community

 

If this post helps, then please consider accepting it as the solution to help other members find it more quickly, and don't forget to give a "Kudos" – I'd truly appreciate it!

 

Regards,

B Manikanteswara Reddy

Hi. Thanks for providing some tips, but I have tested most of these.

  • Limit the number of visuals on the drill through page to only those necessary. Each visual can generate its own query, so reducing the number can help improve performance.
    - The report only has one visual, and that is a table.
  • Use aggregations to reduce the data being queried( I think you already implemented it).
    - The main report, which the drillthrough is initiated from, uses aggregated data and performs well since that part is imported.
  • Instead of page-level filters, consider applying filters directly to visuals where possible. This can help reduce the scope of data queried.
    - How can you turn the drillthrough filters into visual-level filters? They are automatically assigned as page-level filters. This is the last trick I have been trying to sort out, because I also think it would solve the problem, but I haven't been able to assign the inherited filters to the visual.
  • Use Performance Analyzer to check the performance of the queries or visuals.
    - I have used it a lot, but the problem is not the same as when the report runs through its embedded frame on the web portal. Embedded generates a different set of SQL to the server when the report runs through the web portal (not the PBI Service).

So unless I am able to remove the step that generates a "matrix" result for data which is not relevant, I'm not sure how to solve this.



Hi @KjetilAB ,

 

Yes, you're right:

  • When a drillthrough is performed in Embedded, filters are applied at the page level, and the engine attempts to generate visual interactivity metadata upfront, leading to queries that include all potential values. This might explain the huge matrix query you're seeing.
  • Desktop/Service can optimize better for interactive sessions, often skipping redundant metadata generation when the user context is clearer.

While drill through filters are automatically applied at the page level, you can try this workaround:

 

  1. Create a manual parameter or disconnected table that represents the drill through filter values (e.g., Customer ID or Transaction ID).
  2. Bind that field to your main table visual via a calculated table or DAX measure that uses SELECTEDVALUE() from the parameter table.
  3. When initiating the drill through, pass the filter into the parameter table instead of relying on page-level filters.
  4. Then manually filter the visual using a DAX measure that restricts data based on that parameter, avoiding page-level filtering altogether.

Like below:

// A sketch: keep only the transaction rows that match the value captured in the
// disconnected 'DrillthroughParameter' table (table and column names are examples).
Filtered Transactions =
FILTER (
    'TransactionTable',
    'TransactionTable'[CustomerID] = SELECTEDVALUE ( 'DrillthroughParameter'[CustomerID] )
)
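
If a table expression can't be dropped into the filter pane directly, a measure along these lines (again, names are placeholders) could be used as a visual-level filter with the condition "is 1":

// Returns 1 only when the row's CustomerID matches the drillthrough parameter,
// assuming CustomerID is present on each row of the table visual.
Matches Drillthrough =
IF (
    SELECTEDVALUE ( 'TransactionTable'[CustomerID] )
        = SELECTEDVALUE ( 'DrillthroughParameter'[CustomerID] ),
    1
)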

 

As a more architectural approach, consider breaking the drill through page into a separate report, and link to it through a URL with query string filters. This avoids embedding the filters in the page model and allows the separate report to start clean, filtering visuals directly. You can preload only what’s needed, improving performance drastically for Direct Query scenarios with large datasets.
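
For reference, the query string filter syntax for such a separate report would look roughly like this (the workspace, report, page, table, and column names below are placeholders):

https://app.powerbi.com/groups/<workspaceId>/reports/<reportId>/<pageName>?filter=TransactionTable/MerchantID eq '12345'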

 

Since this behaviour is specific to Embedded and you're observing material differences in SQL generation compared to Service/Desktop, this may be a case where Microsoft support should be looped in, especially if the query structure generated by Embedded disregards visual/page filter scope. They can assist with telemetry traces and possibly flag it for engineering.

https://learn.microsoft.com/en-us/power-bi/support/create-support-ticket

 

If this post helps, then please consider accepting it as the solution to help other members find it more quickly, and don't forget to give a "Kudos" – I'd truly appreciate it!

 

Regards,

B Manikanteswara Reddy

 

 

Hi @KjetilAB ,

 

We wanted to kindly follow up to check whether the solution provided for the issue worked. Let us know if you need any further assistance.

 

If our response addressed your issue, please mark it as the accepted solution and click Yes if you found it helpful.

 

Please don't forget to give a "Kudos" – I’d truly appreciate it!

 

Regards,

B Manikanteswara Reddy

Hi @KjetilAB ,

 

As we haven’t heard back from you, we wanted to kindly follow up to check whether the solution provided for the issue worked. Let us know if you need any further assistance.

If our response addressed your issue, please mark it as the accepted solution and click Yes if you found it helpful.

 

Please don't forget to give a "Kudos" – I’d truly appreciate it!

 

Regards,

B Manikanteswara Reddy

 

Hi @v-bmanikante 

Sorry for not answering sooner, but I have been trying the different tips you provided.

Sorry to say, but none of them has helped me.
The tip about a disconnected (calculated) table: I was not able to do this, since the calculated table is computed at processing time while the parameters are set on the fly. So I was not able to use it, unless I did it massively wrong.

I was able to create a version of the Filtered Transactions measure and put it into the visual filter. The problem is that when run through the embedded frame, the DirectQuery SQL still doesn't limit the result. It still generates the massive matrix result, which is really what slows this down.

I am now trying to slim the model down to a bare minimum, splitting the date and time fields, to see if that helps with compression. I will provide feedback here when I find out what works, because I haven't given up yet.
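
For anyone trying the same thing, here is a minimal sketch of the date/time split as calculated columns on an imported table (names are placeholders; for a DirectQuery table it is usually better to do the split in the source view instead):

// Date part only, so the column has far fewer distinct values and compresses better
Transaction Date =
DATE (
    YEAR ( 'Transactions'[TransactionDateTime] ),
    MONTH ( 'Transactions'[TransactionDateTime] ),
    DAY ( 'Transactions'[TransactionDateTime] )
)

// Time part only
Transaction Time =
TIME (
    HOUR ( 'Transactions'[TransactionDateTime] ),
    MINUTE ( 'Transactions'[TransactionDateTime] ),
    SECOND ( 'Transactions'[TransactionDateTime] )
)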

The biggest problem is that you tune the database for the report while you create it and run it through Desktop or the Power BI Service, but when it runs through the frame as Embedded it produces totally different SQL, and more of it.


Hi @KjetilAB ,

 

Thank you so much for the detailed follow-up; we really appreciate you taking the time to try out the suggestions despite your busy schedule.

 

You're absolutely on the right track by slimming down the model and experimenting with separating the date and time fields. These optimizations often help with performance under embedding scenarios.

 

Please do keep us posted on your progress; your findings could be incredibly valuable for others facing similar issues. And if you manage to find a working solution or workaround, we'd be happy to help you refine it further.

 

Looking forward to hearing back from you!

 

Please don't forget to give a "Kudos" – I’d truly appreciate it!

 

Regards,

B Manikanteswara Reddy

Hi.

This was a bit of a headache to get working as it should. I have now managed to find a solution that gives an acceptable response time.

Case:
Get decent performance on a Power BI Embedded solution embedded through a frame on our company portal for about 100k external users.
The PBI report uses a hybrid model where the aggregated data and some dimensions are an imported dataset. It contains 2 tables of about 15 million rows each and a few dimensions like date, merchant, transaction type, etc.
From the aggregated data the users should be able to drill down to detailed transaction level. This drillthrough goes to a DirectQuery page against an Azure SQL DB containing about 1 billion rows of fairly detailed information, and from there the users should be able to download the details to a file for import into their reconciliation systems.

Problem:

I was not able to get the Embedded report to perform at acceptable levels. Everything was nice and very responsive through Power BI Desktop and the PBI Service. It was easy to get it working there, since the scripts sent to the DB were simple: PBI did what was expected with query folding and sent a limited number of SQL statements down to the DB. The problem appeared when it ran through the frame in our company portal. The report also uses Row-Level Security, which adds another level of complexity.

There were several issues here. First, it did not send the same queries as it did through the Service and Desktop. It sent several queries and returned an insane amount of data that wasn't needed in the report.

I found that the main reason for the unnecessary queries was that the list on the DirectQuery report page used a few fields from the imported dimension tables. This causes the engine to create several queries to build the list. It created a massive matrix of all possible combinations of the dimension data, which it pulled from the DB, even though it only needed the one combination already defined in the drillthrough filters. It used this matrix to match against the filters set in the report, but that killed the performance, and it was really hard to tune the indexes correctly due to the several combinations of scripts sent; the in-model work was what really killed performance.

Solution:
I ended up trimming the data in the DirectQuery source. That is always a good move, but I also had to add a few fields there.
The solution was to force PBI to send only one query and actually respect the filters set by the drillthrough. I changed the DirectQuery report page to ONLY use fields from the DQ source. By doing this I got PBI to send just one query even when run through Embedded, and I was then able to tune the indexes perfectly. Now the users can search through detailed transactions, produce lists connected to certain settlement sessions, or produce a list of all transactions for a given month. Merchants with only a few thousand rows get an instant response, and merchants with around 100k transactions get a response in 2-3 seconds.

So to solve this issue, be sure to only use fields from one data source on the DirectQuery page, and especially do not mix them with fields from the imported model.

@v-bmanikante suggested splitting into two reports. This would in theory have fixed the problem, since the DQ report would probably only have had data from that data source. But I was reluctant to do it, since the report had already passed a pen test and I didn't want to hire a new pen-test team because of a second report. I'm also not sure the URL query string filter would have passed the pen test.
Hope this gives a few hints for others with similar issues on what to look for in their solutions.

Regards,
Kjetil Blomsøy
