Hi,
As shown below, new data arrives every week and keeps stacking up in the same folder cumulatively.
The problem is that its format is not stable, and there are more than 100 weeks of history, so I cannot unpivot each week's dataset individually before appending them in the Power Query Editor.
Is there a way to unpivot the data sources as they are fetched into the Query Editor?
Then, no matter how the columns change, there would be no issue appending all the data.
Solved! Go to Solution.
Hi @jeongkim ,
Thank you for reaching out to Microsoft Fabric Community.
Thank you @RicoZhou @Ilgar_Zarbali for the prompt response.
1. Handling 50+ Columns Without Hardcoding
You are right - hardcoding column names won’t scale. To handle any number of columns (even future ones), use:
let
    Source = Excel.CurrentWorkbook(){[Name="YourTableName"]}[Content],
    // Replace "YourTableName" with your actual table or range name
    Unpivoted = Table.UnpivotOtherColumns(Source, {"Block"}, "Week", "Value")
in
    Unpivoted
This automatically unpivots all columns except "Block", no matter how many there are or how often they change.
2. "Looping Through Each Block" Automatically
You don’t need a manual loop. Power Query is already row-wise by design.
Once unpivoted as shown above, your data will look like this, for example:
Block   Week    Value
A       Week 1  123
From here, you can:
1. Group by Block
2. Pivot Week if needed
3. Or analyze it as a clean, flat table
So there's no need to loop manually. Power Query handles that for every block-and-week combination automatically.
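For instance, a Group by Block step after the unpivot can be sketched as follows (a minimal sketch: the step name Unpivoted and the average aggregation are assumptions; adjust them to your own query):

```m
// Sketch only: assumes a previous step (here called Unpivoted) that
// already has the three columns Block, Week, Value.
#"Grouped by Block" = Table.Group(
    Unpivoted,
    {"Block"},
    {{"Average Value", each List.Average([Value]), type number}}
)
```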
If this post helps, please consider accepting it as the solution so other members can find it more quickly, and don't forget to give a "Kudos" – I'd truly appreciate it!
Thank you.
Hi @jeongkim ,
I hope this information is helpful. If this answers your question, please accept it as a solution, so other community members with similar problems can find a solution faster.
Thank you.
Hi @jeongkim ,
May I ask if you have resolved this issue? If so, please mark the helpful reply and accept it as the solution. This will help other community members with similar problems find the solution faster.
Thank you.
Hi @jeongkim ,
I want to check if you had the opportunity to review the information provided and resolve the issue. If the responses have addressed your query, please accept one as the solution and give a 'Kudos' so other members can easily find it.
Thank you.
Step #3: Dynamically Unpivot Columns with Week Label
Power Query Code Snippet (Step #3)
Here’s the code block for this step after your data is cleaned:
let
    // Assume the cleaned data table is named #"Cleaned Table"
    Source = #"Cleaned Table",
    // Specify which columns to keep (others will be unpivoted)
    ColumnsToKeep = {"Week", "Period"},
    // Dynamically unpivot all other columns
    UnpivotedData = Table.UnpivotOtherColumns(Source, ColumnsToKeep, "KPI Name", "KPI Value")
in
    UnpivotedData
"Loop through each block, extract the corresponding week label" > What does looping through each block mean? How can I do that automatically?
Yes, there is a solution using Power Query (M language) that allows you to dynamically unpivot tables even if their column names and counts differ each week. The goal is to normalize each week’s data to a standard three-column format (Week, Parameter, Value) before appending, regardless of how the parameter columns change weekly.
Here’s how you can handle it:
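One hedged sketch of that idea (the folder path, sheet access, and all step names below are placeholders, not taken from your query): define a small function that normalizes any week's table to the three-column shape, then apply it to every file before combining, so no per-week loop is needed:

```m
let
    // All names and the path here are placeholders.
    Source = Folder.Files("C:\WeeklyData"),
    // Assumption: each file is an Excel workbook; take its first sheet.
    AddData = Table.AddColumn(Source, "Data",
        each Excel.Workbook([Content]){0}[Data]),
    // Normalize one week's table to (Week, Parameter, Value), whatever
    // its columns are, by unpivoting every column.
    NormalizeWeek = (tbl as table, weekLabel as text) as table =>
        Table.SelectColumns(
            Table.AddColumn(
                Table.UnpivotOtherColumns(tbl, {}, "Parameter", "Value"),
                "Week", each weekLabel),
            {"Week", "Parameter", "Value"}),
    Normalized = Table.AddColumn(AddData, "Normalized",
        each NormalizeWeek([Data], [Name])),
    Combined = Table.Combine(Normalized[Normalized])
in
    Combined
```

Because every week is reduced to the same three columns before Table.Combine, it does not matter how many parameter columns a given week has or what they are named.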
Hi,
Your suggestion could be the solution, but it is a bit difficult to follow.
Can you explain step #3?
FYI, my data looks like this when it is stacked up as raw:
let
Source = SharePoint.Contents("https://XXX.sharepoint.com/sites/XXX", [ApiVersion = 15]),
#"Shared Documents" = Source{[Name="Shared Documents"]}[Content],
#"KR Cell Status" = #"Shared Documents"{[Name="KR Cell Status"]}[Content],
#"Filtered Rows CT" = Table.SelectRows(#"KR Cell Status", each [Name] = "KT" or [Name] = "SKT" or [Name] = "LGU+"),
#"Listed Columns CT" = Table.ColumnNames(#"Filtered Rows CT"[Content]{0}),
#"Expanded Custom CT" = Table.ExpandTableColumn(#"Filtered Rows CT", "Content", #"Listed Columns CT", List.Transform(#"Listed Columns CT", each "Custom/" & _)),
#"Filtered Rows PM Data - KPI Trend" = Table.SelectRows(#"Expanded Custom CT", each ([#"Custom/Name"] = "PM Data - KPI Trend")),
#"Listed Columns PM Data - KPI Trend" = Table.ColumnNames(#"Filtered Rows PM Data - KPI Trend"[#"Custom/Content"]{0}),
#"Expanded Custom PM Data - KPI Trend" = Table.ExpandTableColumn(#"Filtered Rows PM Data - KPI Trend", "Custom/Content", #"Listed Columns PM Data - KPI Trend", List.Transform(#"Listed Columns PM Data - KPI Trend", each "Custom." & _)),
#"Filtered Rows <> null" = Table.SelectRows(#"Expanded Custom PM Data - KPI Trend", each ([Custom.Name] <> null)),
#"Listed Columns File" = Table.ColumnNames(#"Filtered Rows <> null"[Custom.Content]{0}),
#"Expanded Custom File" = Table.ExpandTableColumn(#"Filtered Rows <> null", "Custom.Content", #"Listed Columns File", List.Transform(#"Listed Columns File", each "Custom-" & _)),
#"Removed Other Columns" = Table.SelectColumns(#"Expanded Custom File",{"Custom-Content", "Custom-Name", "Custom-Date created", "Custom.Name", "Name"}),
#"Added Year" = Table.AddColumn(#"Removed Other Columns", "Year", each Text.Start([#"Custom-Name"], 4)),
#"Added Week" = Table.AddColumn(#"Added Year", "Week", each Text.Middle([#"Custom-Name"], 6, 2)),
#"Added Period" = Table.AddColumn(#"Added Week", "Period", each [Year] & [Week]),
#"Changed Type" = Table.TransformColumnTypes(#"Added Period",{{"Custom-Date created", type date}, {"Week", Int64.Type}, {"Period", Int64.Type}, {"Year", Int64.Type}}),
#"Sorted Rows Descending Period" = Table.Sort(#"Changed Type",{{"Period", Order.Descending}}),
#"Distinct Period" = Table.Distinct(Table.SelectColumns(#"Sorted Rows Descending Period", {"Period"})),
#"Top N= 104 Period" = Table.FirstN(#"Distinct Period", 104),
#"Joined Period and File" = Table.NestedJoin(#"Sorted Rows Descending Period", {"Period"}, #"Top N= 104 Period", {"Period"}, "Period Mapping", JoinKind.Inner),
#"Added Weekly" = Table.AddColumn(#"Joined Period and File", "Weekly", each if [Year] = Date.Year(DateTime.LocalNow()) and [Week] >= Date.WeekOfYear(DateTime.LocalNow()) - 40 then "Keep" else "Remove"),
#"Added Monthly" = Table.AddColumn(#"Added Weekly", "Monthly", each if [Week] = 5 or [Week] = 9 or [Week] = 13 or [Week] = 18 or [Week] = 22 or [Week] = 26 or [Week] = 31 or [Week] = 35 or [Week] = 39 or [Week] = 44 or [Week] = 48 or [Week] = 52 then "Keep" else "Remove"),
#"Merged Weekly Monthly" = Table.AddColumn(#"Added Monthly", "Weekly/Monthly", each [Weekly] & " " & [Monthly]),
#"Split Column by Delimiter Frequency | Region" = Table.SplitColumn(#"Merged Weekly Monthly", "Custom.Name", Splitter.SplitTextByEachDelimiter({"_"}, QuoteStyle.Csv, false), {"Custom.Name.1", "Custom.Name.2"}),
#"Filtered Rows <> Remove Remove" = Table.SelectRows(#"Split Column by Delimiter Frequency | Region", each ([#"Weekly/Monthly"] <> "Remove Remove")),
#"Add Custom Table for Sheet"= Table.AddColumn(#"Filtered Rows <> Remove Remove", "Custom", each Excel.Workbook([#"Custom-Content"])),
#"Listed Columns Sheet" = Table.ColumnNames(#"Add Custom Table for Sheet"[Custom]{0}),
#"Expanded Custom Sheet" = Table.ExpandTableColumn(#"Add Custom Table for Sheet", "Custom", #"Listed Columns Sheet", List.Transform(#"Listed Columns Sheet", each "Custom/" & _)),
#"Filtered Rows 1st Sheet Contains ""System""" = Table.SelectRows(#"Expanded Custom Sheet", each Text.Contains([#"Custom/Name"], "System") or Text.Contains([#"Custom/Name"], "system")),
#"Filtered Rows 5G" = Table.SelectRows(#"Filtered Rows 1st Sheet Contains ""System""", each ([Custom.Name.1] = "5G")),
#"Listed Columns Value" = Table.ColumnNames(#"Filtered Rows 5G"[#"Custom/Data"]{0}),
#"Expanded Custom Value" = Table.ExpandTableColumn(#"Filtered Rows 5G", "Custom/Data", #"Listed Columns Value", List.Transform(#"Listed Columns Value", each "Custom^" & _))
in
    #"Expanded Custom Value"
Hi @jeongkim ,
If you can ensure that there are two rows between the weekly data like your screenshot, you can try this workaround.
Sample:
M Code:
let
Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WCk9NzTZU0lFCQbE60UoBjkBWgBOIcAYRLiDCFSZraQlkYSPAstjMwyYGstyIgOVwewPcQIQ7fuuBBKkOMCbgAA8Q4Ykkh9XbILlYAA==", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [Column1 = _t, Column2 = _t, Column3 = _t, Column4 = _t, Column5 = _t, Column6 = _t]),
#"Changed Type" = Table.TransformColumnTypes(Source,{{"Column1", type text}, {"Column2", type text}, {"Column3", type text}, {"Column4", type text}, {"Column5", type text}, {"Column6", type text}}),
#"Added Custom" = Table.AddColumn(#"Changed Type", "Week", each if Text.Contains([Column1], "Week") then [Column1] else null),
#"Filled Down" = Table.FillDown(#"Added Custom",{"Week"}),
#"Filtered Rows" = Table.SelectRows(#"Filled Down", each not Text.Contains([Column1], "Week") and [Column1] <> ""),
#"Unpivoted Other Columns" = Table.UnpivotOtherColumns(#"Filtered Rows", {"Week"}, "Attribute", "Value"),
#"Added Custom1" = Table.AddColumn(#"Unpivoted Other Columns", "Parameter", each if Text.Contains([Value], "P") then [Value] else null),
#"Added Custom2" = Table.AddColumn(#"Added Custom1", "Custom", each let _Week = [Week], _Attribute = [Attribute] in Table.SelectRows(#"Added Custom1", each [Week] = _Week and [Attribute] = _Attribute and [Parameter] <> [Value]) [Value]),
#"Expanded Custom" = Table.ExpandListColumn(#"Added Custom2", "Custom"),
#"Added Conditional Column" = Table.AddColumn(#"Expanded Custom", "Flag", each if [Value] <> [Custom] then [Custom] else null),
#"Filtered Rows1" = Table.SelectRows(#"Added Conditional Column", each ([Flag] = "99")),
#"Removed Columns" = Table.RemoveColumns(#"Filtered Rows1",{"Attribute", "Value", "Custom"}),
#"Renamed Columns" = Table.RenameColumns(#"Removed Columns",{{"Flag", "Value"}})
in
#"Renamed Columns"
The result is as below.
Best Regards,
Rico Zhou
Sorry, but it seems this part specifies the number of columns, and my data has over 50 columns; I can't predict how many more there will be in the future.