Within Power Query (in Power BI Desktop), I expand a column of records into multiple columns. No issue there, except that the values I get are sort of stair-stepped, with mostly null values around them, like this:
I found a solution for this before: use Group By within PQ and set the aggregate to MIN (even though the values are mostly text), and it seems to work fine. However, it bogs the queries down quite a bit because I have 50 or so columns to deal with, and I'm already at 20+ minutes to refresh fewer than 2,000 rows.
So I was wondering if there is a better way to do this? I'm trying to move things from PQ into the DAX area, so using DAX would be great, but I can't find a solution there either.
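For reference, the Group By / MIN workaround described above would look roughly like this in M. This is only a sketch: the step name #"Expanded parameters", the key columns (Oper., Primary ID), and the parameter column names (P1, P2, ...) are assumptions based on the examples later in this thread, so rename them to match the real query.

let
    // assumed prior step: the table with stair-stepped nulls after expanding the record column
    Source = #"Expanded parameters",
    // one MIN aggregation per parameter column; each group holds exactly one
    // non-null value per column, so MIN simply returns that value
    Grouped = Table.Group(
        Source,
        {"Oper.", "Primary ID"},
        {
            {"P1", each List.Min([P1])},
            {"P2", each List.Min([P2])}
            // ...repeated for the remaining ~50 parameter columns, which is what slows the refresh down
        })
in
    Grouped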
Maybe try
Table.TransformColumns(Source, {"parameters", Record.Combine})
On the step where parameters is still a list. This gives a cleaner expansion as long as each record does not have fields with null values. I suspect the nulls appear because you have a bunch of different records with different field names, and when you expand the records at the same time you get nulls wherever a field doesn't exist in the other records. When you click on an individual cell you can see what the record looks like below.
If records that are being combined have the same field, it will be overwritten by the latest value.
Combining [A = 1] with [A = null, B = 2] will give you [A = null, B = 2].
Combining [A = 1] with [B = 2] will give you [A = 1, B = 2].
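To see that overwrite behavior directly, you can paste a small test like this into a blank query (just an illustration of Record.Combine, nothing from the real data):

let
    // later fields overwrite earlier ones, even when the later value is null
    Overwritten = Record.Combine({[A = 1], [A = null, B = 2]}),   // [A = null, B = 2]
    // fields with different names are simply merged
    Merged = Record.Combine({[A = 1], [B = 2]})                   // [A = 1, B = 2]
in
    {Overwritten, Merged}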
That worked beautifully! For others that are interested, I replaced:
#"Expanded parameters" = Table.ExpandListColumn(#"Prior step", "parameters"),
with:
#"Expanded parameters" = Table.TransformColumns(#"Prior step",{"parameters",Record.Combine}),
Very elegant solution to something that's been bothering me for quite some time. Thank you!!!
I changed the code as you said, but it still comes in stepped. (The modified code and the sequence of steps it goes through were shown as screenshots.)
I tried the Table.Buffer idea, although that wasn't easy since the documentation is a bit sparse. I think it saved a tiny bit. I haven't tried the unpivot/pivot solution as I can't see a clean way to do that.
When the data comes in, prior to expansion it looks like this:
Oper. | Primary ID | Pre-Expansion parameters |
---|---|---|
A | 1697-1-004 | [Record] |
A | 1697-1-004 | [Record] |
B | 1697-1-004 | [Record] |
B | 1697-1-004 | [Record] |
C | 1697-1-004 | [Record] |
C | 1697-1-004 | [Record] |
After expansion, it looks like this:
Oper. | Primary ID | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 |
---|---|---|---|---|---|---|---|---|---|
A | 1697-1-004 | Pilot | null | null | null | null | null | null | null |
A | 1697-1-004 | null | 4 | null | null | null | null | null | null |
B | 1697-1-004 | Pilot | null | null | null | null | null | null | null |
B | 1697-1-004 | null | null | null | null | null | null | 0/5/2/0 | null |
B | 1697-1-004 | null | null | null | null | null | null | null | 6 |
C | 1697-1-004 | null | null | 6 | null | null | null | null | null |
C | 1697-1-004 | null | null | null | 20 | null | null | null | null |
C | 1697-1-004 | null | null | null | null | 6 | null | null | null |
C | 1697-1-004 | null | null | null | null | null | 20 | null | null |
Note that many of the values are delimited, so my number of columns grows quickly too. Also note that after expansion, each parameter row only contains one useful bit of information.
My current solution is to bring all of this into one query (Process_Table). I then create a new query using this:
Source = Table.SelectRows(Process_Table, each ([Operation] = "B"))
followed by removal of a bunch of columns. I can then 'group by' and remove the nulls without much problem. Again, not sure this is the best way, with "best" meaning quickest to process.
Why do we need the filtering for Operation B? Can we just Group By Primary ID and Operation with an All Rows aggregation, then change the generated each _ to each Table.FirstN(Table.FillUp(_, Table.ColumnNames(_)), 1)? Something like the sketch below.
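A sketch of that idea, assuming the key columns are Oper. and Primary ID and the previous step is called #"Expanded parameters" (both assumptions; rename to match the real query):

let
    Source = #"Expanded parameters", // assumed step with the stair-stepped nulls
    // "All Rows" grouping, then collapse each group: Table.FillUp pulls every
    // non-null value up into the first row, and Table.FirstN keeps only that row
    Grouped = Table.Group(
        Source,
        {"Oper.", "Primary ID"},
        {{"AllRows", each Table.FirstN(Table.FillUp(_, Table.ColumnNames(_)), 1), type table}}),
    // expand the collapsed rows back out, skipping the key columns we grouped on
    Expanded = Table.ExpandTableColumn(
        Grouped,
        "AllRows",
        List.Difference(Table.ColumnNames(Source), {"Oper.", "Primary ID"}))
in
    Expanded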
What does each record look like? Is there a way to clean it up and combine it so it does not expand so awkwardly into separate lines? Is there some way to Group By ID and use Record.Combine(_[parameter column])? Something like below:
let
    Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WclRQ0lEyNLM01zXUNTAwAXKiFQIMFWwVYpQCMnPyS2KUFGKVYnVwKDQCKjSBKnAi0iBs6oyB6syA0rEA", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [#"Oper." = _t, #"Primary ID" = _t, #"Pre-Expansion parameters" = _t]),
    #"Changed Type" = Table.TransformColumnTypes(Source, {{"Oper.", type text}, {"Primary ID", type text}, {"Pre-Expansion parameters", type text}}),
    transform = Table.TransformColumns(#"Changed Type", {"Pre-Expansion parameters", each Expression.Evaluate(_)}), // just converting my example to records
    #"Grouped Rows" = Table.Group(transform, {"Primary ID"}, {{"parameters", each Record.Combine(_[#"Pre-Expansion parameters"])}})
in
    #"Grouped Rows"
I must apologize as I don't know how to implement your suggestion. What we were given by the data host company is as follows (which may not be optimized):
BasicAuth parameter as "3t54ht343h5h"
lastNDays parameter as N
getData as follows:
(page as number) =>
let
    authUrl = "https://app.xyz.com/api/authorization",
    endpointUrl = "https://app.xyz.com/api/productions",
    random1 = Number.Random(),
    startTime =
        Number.ToText(Date.Year(Date.AddDays(DateTime.FixedLocalNow(), -lastNDays))) & "-" &
        Number.ToText(Date.Month(Date.AddDays(DateTime.FixedLocalNow(), -lastNDays))) & "-" &
        Number.ToText(Date.Day(Date.AddDays(DateTime.FixedLocalNow(), -lastNDays))),
    Source = Json.Document(Web.Contents(authUrl,
        [Headers = [#"Content-Type" = "application/json", Authorization = BasicAuth, #"Accept" = "*/*"],
         Content = Json.FromValue([scopes = {"productions_write"}, random = random1])])),
    TokenTable = Record.ToTable(Source),
    accessToken = #"TokenTable"{2}[Value],
    Origin = Json.Document(Web.Contents(endpointUrl,
        [Query = [access_token = accessToken, #"start-time-after" = startTime, page = Number.ToText(page)],
         Headers = [access_Token = accessToken, #"start-time-after" = startTime, page = Number.ToText(page)]]))
in
    Origin
And then the start of each query is this:
let
    allProductionRecords = List.Generate(
        () => [Result = if List.IsEmpty(getProdData(1)) then null else getData(1), page = 1],
        each [Result] <> null,
        each [Result = if List.IsEmpty(getData([page] + 1)) then null else getData([page] + 1), page = [page] + 1],
        each [Result]),
    #"Converted to Table" = Table.FromList(allProductionRecords, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
    #"Expanded Column1" = Table.ExpandListColumn(#"Converted to Table", "Column1"),
    etc.
Can we change this part of your List.Generate and check?
each [Result] <> null,
each [Result = if List.IsEmpty(getData([page] + 1)) then null else getData([page] + 1), page = [page] + 1]
into
each List.IsEmpty([Result]) = false,
each [Result = getData(page), page = [page] + 1]
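Put together, the revised paging loop might look like the sketch below. One extra assumption here: the first state calls getData(1) directly (where the original used the getProdData null check), so that List.IsEmpty can be applied to every state; if page 1 can legitimately return something that is not a list, keep the original guard.

let
    allProductionRecords = List.Generate(
        () => [Result = getData(1), page = 1],              // assumes getData always returns a list
        each List.IsEmpty([Result]) = false,                // stop as soon as a page comes back empty
        each [Result = getData(page), page = [page] + 1],   // bare "page" is the new value, [page] the previous one
        each [Result]),
    #"Converted to Table" = Table.FromList(allProductionRecords, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
    #"Expanded Column1" = Table.ExpandListColumn(#"Converted to Table", "Column1")
in
    #"Expanded Column1"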
Next, if it is still stepped, then we need to see if we can transform the list of [Result] so that it does not expand into such a strange format. It would help to see what some of the [Result] values look like.
Here are the applied steps for one of my queries (shown in a screenshot):
In my file I have 13 queries, and most of them are at least similar to the above. In essence: pull in data from the API, expand and choose what I want, then do the necessary operations. Unfortunately, the 'parameters' step is what causes the stair-stepped null value problem, and it's where most of my information is. What I'm doing is pulling in all process data that is collected in our manufacturing system (via its API) and then graphing, analyzing, etc. The company that hosts our data is not very adept at PBI either, so I'm sure there are some inefficiencies in there too.
I've noticed that filtering takes forever, and so does grouping. Most of the excess time is spent waiting on the API, so it could be more of an API issue than a PBI one; that's why I'm trying to move as much as I can into the DAX area. I'm a bit of a novice here, so I've rebuilt my file from scratch a few times now to keep improving. Over the holidays will be rebuild #4, I believe.
This stepped behavior on expansion usually occurs when a couple of columns should be combined prior to expansion and/or combined and then unpivoted. That's why it will help to see some specific examples.
Try putting a Table.Buffer around your first step (or the first step that is a Table). Maybe your query keeps repeating the API call and this can prevent that. If it helps then keep the Table.Buffer. If your first step is a list maybe try a List.Buffer.
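As an illustration, buffering the paging query from earlier might look like this. This is a sketch that reuses the getData function and step names shown above (with getData(1) in place of getProdData(1) for the empty check); whether it actually stops the API calls from repeating is something to test, since buffering behavior depends on the query.

let
    // List.Buffer, because this first step is a list of page results
    allProductionRecords = List.Buffer(
        List.Generate(
            () => [Result = if List.IsEmpty(getData(1)) then null else getData(1), page = 1],
            each [Result] <> null,
            each [Result = if List.IsEmpty(getData([page] + 1)) then null else getData([page] + 1), page = [page] + 1],
            each [Result])),
    // Table.Buffer on the first step that is a table, so later steps reuse it instead of re-evaluating it
    #"Converted to Table" = Table.Buffer(Table.FromList(allProductionRecords, Splitter.SplitByNothing(), null, null, ExtraValues.Error)),
    #"Expanded Column1" = Table.ExpandListColumn(#"Converted to Table", "Column1")
in
    #"Expanded Column1"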
Can you please provide some sample data to work with that has a similar structure to what you are working with? You can keep your company data protected by changing it to something that can be shared. For example, click on the Filtered Rows1 step, copy the preview data and paste it into Excel, then change the values to content that can be shared. Then paste the table into a message. Do the same thing with the parameter values so we can see what their structure is. Please include an explanation of how the parameter values relate to the previous step so we can sufficiently help. Thanks!
What is happening in the query before the columns are expanded? I think that may be causing the issue. If that is resolved, then you can avoid having to fix the problem above.
Maybe select your unique columns, do an Unpivot Other Columns, and then Pivot on the Attribute column with Value as the values column and no aggregation. Something like the sketch below.
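A sketch of that unpivot/pivot round trip, assuming Oper. and Primary ID are the unique columns and #"Expanded parameters" is the stair-stepped table (rename to match the real query). Unpivoting drops the null cells, and pivoting back with no aggregation only works because each Oper. / Primary ID / parameter combination holds a single value:

let
    Source = #"Expanded parameters", // assumed prior step with the stair-stepped nulls
    // unpivot everything except the key columns; null cells are dropped automatically
    Unpivoted = Table.UnpivotOtherColumns(Source, {"Oper.", "Primary ID"}, "Attribute", "Value"),
    // pivot the Attribute column back into columns, keeping the single Value per cell (no aggregation)
    Pivoted = Table.Pivot(Unpivoted, List.Distinct(Unpivoted[Attribute]), "Attribute", "Value")
in
    Pivoted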