Hello there,
First of all, a big thanks for any help I receive.
Well, I'm not a BI developer; I'm actually a C# game developer venturing in here.
I have an extensive data set from one of the games I'm developing, recording gameplay data for each match of each user.
With a small data set, I could flatten all the matches as I needed with this code:
My issues appear when applying this to the real production data, which has more than 26k users (the total rows in this code), where each user can have N matches recorded.
The query has been running for more than an hour, and I don't know when it will finish.
Is there a way to optimize this first query, which just flattens the records into one big table that I'll use later to generate insights?
Thanks in advance
There's probably a way to optimize your M code, but in your situation, where you're in charge of the app, I'd flatten the JSON into one or more tables (depending on the data structure) in a staging database and use that as the source for Power BI. Power Query is convenient, but it's not fast. Also, you won't be able to do incremental refresh with a file source, so things will only get worse as your dataset grows over time.
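To illustrate the staging idea, here's a minimal Python sketch, assuming a hypothetical JSON shape (a `userId` plus a list of `matches` per user — the field names and columns are placeholders, so adapt them to your actual schema). The point is to flatten the nested records once, outside Power Query, into a table Power BI can read directly:

```python
import json
import sqlite3

# Assumed (hypothetical) input shape:
# [{"userId": "u1", "matches": [{"matchId": "m1", "score": 10, "durationSec": 300}, ...]}, ...]

def flatten_to_staging(raw_json: str, db_path: str = ":memory:") -> sqlite3.Connection:
    """Flatten per-user nested match records into one wide staging table."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS matches (
               user_id TEXT,
               match_id TEXT,
               score INTEGER,
               duration_sec INTEGER
           )"""
    )
    rows = []
    for user in json.loads(raw_json):
        for match in user.get("matches", []):
            rows.append((
                user["userId"],
                match.get("matchId"),
                match.get("score"),
                match.get("durationSec"),
            ))
    # One executemany inside a single transaction keeps inserts fast
    # even with tens of thousands of users.
    conn.executemany("INSERT INTO matches VALUES (?, ?, ?, ?)", rows)
    conn.commit()
    return conn

# Example usage with two matches for one user
sample = json.dumps([
    {"userId": "u1", "matches": [
        {"matchId": "m1", "score": 10, "durationSec": 300},
        {"matchId": "m2", "score": 7, "durationSec": 250},
    ]},
])
conn = flatten_to_staging(sample)
print(conn.execute("SELECT COUNT(*) FROM matches").fetchone()[0])  # 2
```

In production you'd point this at a real database (SQL Server, Postgres, etc.) rather than SQLite, which also unlocks incremental refresh in Power BI.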