Hello there,
First of all, a big thanks for any help received.
Well, I'm not a BI developer; I'm actually a C# game developer adventuring into this area.
I have an extensive data set from one of the games I'm developing, which records gameplay data for each match of each user.
With a small data set, I could flatten all the matches as I needed with a Power Query (M) query.
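Roughly, it looks like this (a simplified sketch, the file path and column names here are just placeholders, not the real ones):

```
let
    // placeholder path – the real source is whatever file holds the match JSON
    Source = Json.Document(File.Contents("C:\data\matches.json")),
    // one record per user, each holding a list of match records
    Users = Table.FromRecords(Source),
    // expand the list so there is one row per match
    ExpandedMatches = Table.ExpandListColumn(Users, "matches"),
    // then expand each match record into columns (placeholder field names)
    Flattened = Table.ExpandRecordColumn(ExpandedMatches, "matches",
        {"matchId", "score", "durationSeconds"},
        {"matchId", "score", "durationSeconds"})
in
    Flattened
```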
My issues appear when applying this to the real production data, which has more than 26k users (the total rows in that table), and each user can have N matches recorded.
The query has been running for more than an hour, and I can't tell when it will finish.
Is there a way to optimize this first query, which just flattens the records into one big table that I'll use later to generate insights?
Thanks in advance
There's probably a way to optimize your M code, but in your situation, where you're in charge of the app, I'd flatten the JSON into one or more tables (depending on the data structure) in a staging database and use that as the source for Power BI (see the sketch below). Power Query is convenient, but it's not fast. Also, you won't be able to do incremental refresh with a file source, so things will only get worse as your dataset grows over time.
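For example, once the app (or a small ETL job) has written the flattened matches into a staging table, the Power Query side collapses to a simple navigation step. The server, database, and table names below are just placeholders:

```
let
    // hypothetical staging database the app writes flattened matches into
    Source = Sql.Database("your-sql-server.database.windows.net", "GameTelemetry"),
    // one row per match, already flattened by the app / ETL job
    FlattenedMatches = Source{[Schema = "dbo", Item = "FlattenedMatches"]}[Data]
in
    FlattenedMatches
```

With the data in a relational table you can also set up incremental refresh on a date/time column, so Power BI only pulls recent matches instead of re-reading the whole history on every refresh.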