We have an embedded Power BI solution in a .NET web app that lets users visualize their data on demand. We process the data and hold it in an in-memory data table within the app. We used to push this data to a Power BI push dataset, but we wanted extended functionality such as localization of metadata and measures in our dataset, so we switched to XMLA endpoint-based Tabular Object Model (TOM) datasets. To push data into the dataset we have been hardcoding the JSON rows as strings in M queries and creating partitions from them. This has been bad for performance: the data refresh is slow, and it gets slower as the number of partitions we push grows.
I was hoping to get some advice on how to get the data into TOM datasets so that it is available quickly / has better data-loading performance, similar to what we had with push datasets when pushing rows through the Power BI REST API.
M expression being used in partitions:
let
    Source = Text.ToBinary("{json}"),
    ParsedJson = Json.Document(Source),
    JsonToTable = Table.FromRecords(ParsedJson)
in
    JsonToTable
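
For context, this is roughly how we create those partitions through TOM from the .NET app. The snippet below is only a sketch: the connection string, workspace, dataset and table names, and the JSON payload are placeholders, not our real values.

using System;
using Microsoft.AnalysisServices.Tabular;

// Connect to the workspace XMLA endpoint (placeholder connection string;
// authentication details such as the access token are assumed to be supplied).
var server = new Server();
server.Connect("DataSource=powerbi://api.powerbi.com/v1.0/myorg/MyWorkspace;Password=<access token>");

Database database = server.Databases.GetByName("DatasetName"); // placeholder dataset name
Table table = database.Model.Tables["Sales"];                  // placeholder table name

// Each batch of rows becomes a new partition whose source is an M query
// with the JSON payload embedded as a string literal (quotes doubled for M).
string json = "[{\"Id\":1,\"Amount\":10.5}]";                  // placeholder payload
string mExpression =
    "let\n" +
    "    Source = Text.ToBinary(\"" + json.Replace("\"", "\"\"") + "\"),\n" +
    "    ParsedJson = Json.Document(Source),\n" +
    "    JsonToTable = Table.FromRecords(ParsedJson)\n" +
    "in\n" +
    "    JsonToTable";

table.Partitions.Add(new Partition
{
    Name = "SalesBatch_" + DateTime.UtcNow.ToString("yyyyMMddHHmmss"),
    Source = new MPartitionSource { Expression = mExpression }
});

database.Model.SaveChanges();              // commit the new partition metadata
table.RequestRefresh(RefreshType.Full);    // reprocesses every partition in the table
database.Model.SaveChanges();              // SaveChanges executes the requested refresh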
Why not just use incremental refresh in Power BI and refresh the same partition multiple times? Each refresh would then import the additional data.
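
As a sketch of that idea with TOM: keep one data partition, point its M expression at just the newest batch of rows, and request a ProcessAdd (RefreshType.Add) so only those rows are appended instead of the whole table being reloaded. The connection string, dataset, table and partition names, and the payload below are placeholders.

using Microsoft.AnalysisServices.Tabular;

var server = new Server();
server.Connect("DataSource=powerbi://api.powerbi.com/v1.0/myorg/MyWorkspace;Password=<access token>"); // placeholder

Database database = server.Databases.GetByName("DatasetName");                // placeholder dataset name
Partition partition = database.Model.Tables["Sales"].Partitions["SalesData"]; // placeholder names

// Point the existing partition at the new rows only (placeholder payload)...
string newRowsJson = "[{\"Id\":2,\"Amount\":20.0}]";
((MPartitionSource)partition.Source).Expression =
    "let\n" +
    "    Source = Text.ToBinary(\"" + newRowsJson.Replace("\"", "\"\"") + "\"),\n" +
    "    ParsedJson = Json.Document(Source),\n" +
    "    JsonToTable = Table.FromRecords(ParsedJson)\n" +
    "in\n" +
    "    JsonToTable";

// ...and request ProcessAdd so these rows are appended to the data already in
// the partition, rather than reprocessing everything from scratch.
partition.RequestRefresh(RefreshType.Add);
database.Model.SaveChanges();

Whether ProcessAdd in this form is actually faster in your scenario is worth measuring, since the M source still has to embed the new rows as text, but it avoids re-reading the older batches on every push.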