I have daily data that is grouped, sorted, and then has its last record selected. I've then expanded one field of this record, but in the process it changes the data type to general instead of number (as it was originally). It's important that I use the most optimised route possible here... Is it hurting performance if the data type changes here and I manually convert it back to a number data type? If so, is there a way I can do something similar while retaining the original number data type?
Hi @emarc1,
From a practical point of view, this does not slow your query down; Power Query is quite good at optimising these kinds of steps.
If you don't like having an extra step in your query, you can push the type declaration into the Table.Group step.
This will look something like this:
step = Table.Group(previous_step, {"Col1", "ColB"}, {{"Last", each List.Last([ColC]), type number}})
Cheers,
John
All the grouped columns have their type specified; they just seem to be lost when expanding in this way. I was under the impression that switching to strings would hit performance, so I'll just manually switch it back to a decimal number before the next steps. I'm manipulating data from our system so that I can work out the stock value for every item, on each day, over a few years. The rows and performance hits quickly add up across multiple merges and group operations. It was really starting to get slow, so I've been trying to optimise anything I can. I noticed that some Table.NestedJoin-and-expand combinations were a lot faster when switched to Table.Join with the excess columns removed afterwards (roughly as sketched below). So at least it's usable for now!
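For reference, the swap looked roughly like this (table and column names are placeholders, and Table.Join needs the two tables to have distinct column names, hence the prefix):

joined = Table.Join(Stock, {"ItemCode", "Date"}, Table.PrefixColumns(Prices, "P"), {"P.ItemCode", "P.Date"}, JoinKind.LeftOuter),
// drop the duplicated key columns brought in from the right-hand table
cleaned = Table.RemoveColumns(joined, {"P.ItemCode", "P.Date"})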
You can also look at the optional groupKind parameter of Table.Group. If you can guarantee that rows sharing a key are adjacent (i.e. the table is already sorted by the grouping columns), GroupKind.Local can improve performance dramatically.
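A minimal sketch, reusing the placeholder columns from above and assuming the table is already sorted so that equal keys sit together (GroupKind.Local only groups adjacent runs of equal keys, so unsorted input will give different results from the default GroupKind.Global):

grouped = Table.Group(sorted_step, {"Col1", "ColB"}, {{"Last", each List.Last([ColC]), type number}}, GroupKind.Local)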
Also, in some cases you can use Table.Combine instead of Table.Join (typically when you want to calculate the sum or difference of the merged columns). This can also improve performance, especially if you are going to group the data in a later step anyway.
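A rough sketch of that pattern, with placeholder tables TableA and TableB that share the same column layout:

combined = Table.Combine({TableA, TableB}),
summed = Table.Group(combined, {"Col1", "ColB"}, {{"Total", each List.Sum([ColC]), type number}})

For a difference rather than a sum, you could negate the value column in one of the tables first, e.g. Table.TransformColumns(TableB, {{"ColC", each -_, type number}}).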
I'll check that out. I did experiment with JoinAlgorithm.SortMerge on Table.Join too, which seems similar, but it wasn't acting quite as predictably as I'd hoped... I think that's because I was merging on two columns in each table (item code and date). Maybe it would work better if I made a single key field from those columns.
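For anyone trying the same thing, this is roughly the shape of what I attempted (names are placeholders; note that JoinAlgorithm.SortMerge assumes both tables are already sorted by the join keys, which is probably where my unpredictable results came from):

sortedLeft = Table.Sort(Stock, {{"ItemCode", Order.Ascending}, {"Date", Order.Ascending}}),
sortedRight = Table.Sort(Table.PrefixColumns(Prices, "P"), {{"P.ItemCode", Order.Ascending}, {"P.Date", Order.Ascending}}),
joined = Table.Join(sortedLeft, {"ItemCode", "Date"}, sortedRight, {"P.ItemCode", "P.Date"}, JoinKind.LeftOuter, JoinAlgorithm.SortMerge)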