lukapavcnik
New Member

Slow data fetching

I am creating a visual that displays a 3d model from the data it receives. The problem i am having is that at large amount of data around 30k rows the visual takes a very long time to update (around 5-10mins). My assumption is that i have incorrectly setup the capabilities.json, because i wanted to display every row at once, i wanted to use table data view mappings, however it does not support highlighting so i went with categorical. Below i added pictures to the code of capabilities.json, its really important that i show all the data. Is there any way to speed the fetching up or fix the capabilites?

lukapavcnik_0-1622028228629.png

lukapavcnik_1-1622028244833.png

 

 

1 ACCEPTED SOLUTION
dm-p
Super User
Super User

Hi @lukapavcnik,

Your capabilities look okay to me in terms of setup. It's hard to tell without knowing more about your data, though, or seeing how you're managing this in your code, i.e. whether you're waiting until all segments have loaded or mapping the data view on each update, which can add processing overhead before the next update runs.

From my observations, fetching performance is contingent on a number of things: most notably the cost of the query that Power BI needs to run against your data model to get the incremental data on each refresh, and the time to transport the results to your visual over HTTP (and, by inference, the size of this payload). It also matters whether you avoid doing any data processing on the dataView until things are finished (I typically use a status message to indicate rows are being loaded, but don't map data until there's no segment left to process).

It's hard to judge without seeing the data, but if the 3D model data (presumably the geometry data role) is sufficiently large then fetching 30K rows at a time could be quite prohibitive on the model, and on the network. It could be worth using your browser dev tools to determine the size of each response from the querydata endpoint to see if there's anything transport-wise that's potentially slowing things down.

In my implementations (whilst I've admittedly not used potentially large fields), I've found that my visual is more responsive if I reduce the window size; I usually use 10K rows or fewer rather than 30K. Whilst this needs more fetches, the reduced overhead of each request does seem to improve the total load time, but your mileage may vary based on your model and data.
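For reference, the window size is controlled by the dataReductionAlgorithm on the data view mapping in capabilities.json. A sketch of a categorical mapping with a smaller window (the role names `category` and `measure` are placeholders for your own data roles):

```json
{
    "dataViewMappings": [
        {
            "categorical": {
                "categories": {
                    "for": { "in": "category" },
                    "dataReductionAlgorithm": { "window": { "count": 10000 } }
                },
                "values": {
                    "select": [{ "for": { "in": "measure" } }]
                }
            }
        }
    ]
}
```

With a window algorithm in place, each call to fetchMoreData() brings back the next window of rows until the data set is exhausted.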

I would say that if you want definitive assistance on this, then it's probably best to reach out to the visuals team directly, or add an issue to the SDK's GitHub repo.

Regards,

Daniel





Did I answer your question? Mark my post as a solution!

Proud to be a Super User!


On how to ask a technical question, if you really want an answer (courtesy of SQLBI)




