Hi there,
For Power BI and Business Central, which all claim to be part of the same stack, performance sure leaves a lot to be desired.
Examples:
When I connect to a Business Central table via the normal integrated API connector, the full data is loaded.
In my example the table is project tasks. The whole dataset is "only" 47 MB, yet the connection takes minutes to refresh the data.
When I instead use the OData link, connect via OData as the source, and integrate filters directly into the query, I can cut the downloaded data to under 5 MB. That's not even 10% of the data.
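To make the OData approach concrete, here is a minimal Python sketch of the kind of filtered query I mean: pushing `$select` and `$filter` into the URL so the server only returns the rows and columns needed. The tenant, company, entity, and field names below are placeholders, not values from a real environment:

```python
from urllib.parse import quote

# Placeholder endpoint -- tenant, company, and entity names differ per environment.
BASE = ("https://api.businesscentral.dynamics.com/v2.0/"
        "<tenant-id>/Production/ODataV4/Company('CRONUS')/ProjectTasks")

def build_odata_url(base, select, filt, top):
    """Compose an OData query that filters rows and trims columns server-side."""
    parts = [
        "$select=" + ",".join(select),          # only the columns the report needs
        "$filter=" + quote(filt, safe="'()"),   # push the filter to the server
        "$top=" + str(top),                     # cap the row count per call
    ]
    return base + "?" + "&".join(parts)

url = build_odata_url(BASE, ["Job_No", "Job_Task_No", "Description"],
                      "Job_Task_Type eq 'Posting'", 5000)
```

In Power Query, the equivalent is to put these query options directly into the OData feed URL, so the trimming happens on the server before anything crosses the wire.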
I would assume that transforming and cleaning up datasets in Power BI Desktop would also "cleanse" my data request, but it does not seem to work that way at all.
Power BI always loads the complete data, then transforms it.
Which is just... frustrating.
So, since OData falls under the same restrictions regarding the general API limitations (number of synchronous calls, transfer-rate limitations, etc.),
what is the most effective and efficient way to build Business Central datasets that span several tables?
Since every table has to be called as an individual data source, I quickly run into the aforementioned limitations.
Is it even possible to work around that without actually developing direct API data sources (meaning writing my own AL code) and combining tables into a single API source?
Thanks in advance.
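One thing worth probing here (I have not verified it against every BC page): OData's `$expand` option can return a header entity together with its related lines in a single call, where the endpoint exposes a navigation property. A hypothetical Python sketch of such a URL, with all entity and navigation-property names made up for illustration:

```python
# Hypothetical sketch: one OData call that expands related lines into the
# header result, so two logical tables cost a single request.
# Entity and navigation-property names are placeholders.
base = ("https://api.businesscentral.dynamics.com/v2.0/"
        "<tenant-id>/Production/ODataV4/Company('CRONUS')/Jobs")
query = "?$select=No,Description&$expand=JobTasks($select=Job_Task_No,Description)"
url = base + query
```

Whether this works depends on the navigation properties the particular page or API exposes, so it would need testing against the concrete endpoint.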
Hi @bco_lde ,
The information you have provided does not make the problem clear to me. Could you please explain with an example?
Looking forward to your reply.
Best Regards,
Henry
I fail to understand what kind of example you would need to understand the problem.
The dataset is > x in size.
Power BI always loads the complete data before the transformation step, and it does this for every data refresh:
it loads the data, completes the transformation, then refreshes the reports with the newly transformed data.
The API limits of Business Central are strict: timeout limits, bandwidth restrictions, a limited number of synchronous calls.
So a dataset with two tables uses two calls; five tables, five calls.
If the call limit is reached, no new call may be opened until other calls are finished.
If a call takes a long time because the table is too big, it runs into a timeout.
A timeout on one call means the complete data refresh simply fails; the call is not restarted.
This means: limit the number of calls as much as possible,
and limit the volume of data as much as possible.
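To illustrate the failure mode I mean: a throttled or slow call kills the whole refresh because nothing retries it. A rough Python sketch, purely illustrative and not the connector's actual logic, of what a more tolerant client would do instead, retrying a throttled call and honouring the server's Retry-After hint:

```python
import time

def fetch_with_backoff(get, url, max_retries=3):
    """Retry a throttled call instead of failing the whole refresh.

    `get` is any callable returning an object with .status_code and .headers
    (e.g. requests.get); it is injected here so the sketch stays testable offline.
    """
    for attempt in range(max_retries + 1):
        resp = get(url)
        if resp.status_code != 429:          # 429 = too many requests (throttled)
            return resp
        # Honour the server's Retry-After hint, else fall back to exponential backoff.
        delay = float(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(delay)
    return resp
```

Since the built-in connector does nothing like this on a failed call, the only lever left on the client side is keeping each call small and fast enough that it never hits the limit in the first place.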
As stated, the built-in Power BI -> BC connector seems to always load the complete data source before applying the transformation, and it does this for each refresh.
Hence my question: how can I optimize datasets without developing my own, optimized API directly in BC?
Is there even a way to do so?
Thanks in advance.