I fetch my data from an OData feed. Why doesn't Power BI get the whole table in one API call, instead of calling the server for each cell?
It causes a lot of problems for the server and I can't load large data (a table with 50,000 rows; I'm sure Power BI can handle much bigger data).
And when I expand a table, the same problem occurs.
Hi @Mornagli,
Did you mean that you select multiple tables in the navigation pane to get data from the data source at the same time?
As far as I know, Power BI currently splits these tables into multiple query tables, and each table includes its own data connector that gets data from your data source.
For this scenario, you can try modifying your connection to get data from the parent level and pull the table list into Power BI. Then you can create query tables that reference the sub-tables stored in that main table list.
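A minimal Power Query (M) sketch of that pattern, assuming a hypothetical service URL and an entity set named "Orders" (adjust to your own feed):

```
// Base query, e.g. named "ODataRoot": one call to the service root
// that returns the navigation table with all entity sets.
let
    Source = OData.Feed("https://example.com/odata", null, [Implementation = "2.0"])
in
    Source

// Referencing query, e.g. named "Orders": references ODataRoot
// instead of opening its own connection to the feed.
let
    Source = ODataRoot,
    Orders = Source{[Name = "Orders", Signature = "table"]}[Data]
in
    Orders
```

Note that referencing queries still evaluate against the source when they are loaded; this mainly keeps the connection definition in one place.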
Regards,
Xiaoxin Sheng
Hi @v-shex-msft , the process I follow is:
- With the Power BI OData feed connector, I enter the main URL of our API and get all the tables in Power Query.
- One specific table contains a sub-table that I want to expand.
- If I expand this sub-table, it takes some time but it works, but....
- When I click 'Close & Apply', our server runs into problems and the data does not load...
I want to understand why it looks like Power BI sends an API call for each cell. After I enter the URL and get my tables into Power Query, why doesn't Power BI use the values locally? It seems that Power BI sends a lot of API calls....
Bottom line: can I make only two API calls?
The first with the URL, to load the tables I want into Power Query.
And the second API call when I expand.
The API server can't handle 50,000 calls in a couple of seconds....
I have attached my code (with the data changed manually); maybe it will clear some things up:
Thank you.
Hi @Mornagli,
>> The first with the URL, to load the tables I want into Power Query.
Unfortunately, I don't think you can customize these parts. These operations are processed by the Power Query engine itself, and it does not provide options to optimize them.
For this scenario, I'd suggest doing these operations (getting the data and expanding the table) outside of Power BI, and only using Power BI to get data from the resulting tables.
As you know, Power BI is a reporting tool, so it may not fit every situation. In my opinion, using the right tool for each operation is simpler than forcing another tool to do it.
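For context, a typical expand step over an OData feed looks like the hedged sketch below (hypothetical entity and column names). The nested "OrderLines" column is a deferred navigation property, so the expand step may issue a separate request per parent row, which matches the behavior you are describing:

```
let
    Source = OData.Feed("https://example.com/odata", null, [Implementation = "2.0"]),
    Orders = Source{[Name = "Orders", Signature = "table"]}[Data],
    // Expanding the deferred navigation property is the expensive step:
    Expanded = Table.ExpandTableColumn(Orders, "OrderLines",
        {"ProductId", "Quantity"}, {"OrderLines.ProductId", "OrderLines.Quantity"})
in
    Expanded
```

Doing this flattening outside of Power BI avoids re-running it on every refresh.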
Regards,
Xiaoxin Sheng
Thank you @v-shex-msft .
If I first load the table without expanding the sub-table and click Close & Apply,
and then open Power Query again and expand the sub-table,
will that split the API calls into two batches? Or will the main table need API calls again the second time?
And if I find a way to pull the main table for each year separately, I would have a few small tables to load and expand (that way, I think the API will be able to handle the calls).
And then combine them into one big table. Is that possible? Or could it cause problems later if I want incremental refresh and so on...?
Thank you.
Hi @Mornagli,
#1, These sub-query reference operations will also redirect to the main data table and trigger requests against your database.
#2, If the sub-level records are still compressed and stored inside the source data tables, this may not improve performance.
Here are some detailed comments about my suggestion:
You can split the get-data steps by adding a 'transit layer' that extracts the source data from the root data source and stores it in a standalone data source or in files. (This part should be a scheduled job that syncs the data automatically.)
Then you can use a Power Query data connector to get data from the 'transit layer' instead of the root level; this reduces the access to the source database and balances its workload.
In summary, these steps split the process into a simple multi-tier architecture (Power BI report <-refresh scheduler-> standalone data source <-auto-sync-> database), as in the sketch below.
Power BI requests will stop at the 'transit layer' and will not directly affect the root data source. (Notice: the additional 'transit layer' may delay data updates.)
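A hedged sketch of the report-side query under that architecture, assuming a hypothetical staging SQL database ("StagingDB" on "staging-server") that a scheduled job keeps in sync with the OData source and that already holds the pre-expanded table:

```
// The report reads from the 'transit layer' (staging database)
// instead of calling the OData root directly.
let
    Source = Sql.Database("staging-server", "StagingDB"),
    // "OrdersFlat" is a hypothetical table maintained by the sync job
    OrdersFlat = Source{[Schema = "dbo", Item = "OrdersFlat"]}[Data]
in
    OrdersFlat
```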
Regards,
Xiaoxin Sheng
Thank you! @v-shex-msft ,
Can you give more details about the process for the 'transit layer'? This sounds like a good way to solve my problem, but what are the steps I need to follow?
Thank you.
Hi @Mornagli,
The middle level should be a standalone database or file. You can export data from the raw database, or create a data flow / web service that invokes the API to pull data from the database and then syncs it to the target database.
The simplest way is to export the data to Excel, CSV, or another file type, and then have Power BI get data from these exported files.
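For the file-based option, the report-side query could look like this hedged sketch (hypothetical file path; the exported CSV is refreshed by your export job, so Power BI never calls the OData API at refresh time):

```
let
    Source = Csv.Document(File.Contents("C:\exports\orders_flat.csv"),
        [Delimiter = ",", Encoding = 65001, QuoteStyle = QuoteStyle.Csv]),
    // Use the first row of the export as column headers
    Promoted = Table.PromoteHeaders(Source, [PromoteAllScalars = true])
in
    Promoted
```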
Regards,
Xiaoxin Sheng