Hello!
I want to load measurement data from a system.
The problem is that the data is imported into a single row, so you quickly hit the column limit. Example:
Start date1 (UTC), value, quality, status, minimum value, maximum value, Start date2 (UTC), value, quality, status, minimum value, maximum value, Start date3 (UTC), value, quality, status, minimum value, maximum value, etc.
How can I import Columns 1 to 6 into the first row (after the "]"), then Columns 7 to 12 into the second row (after the "]"), Columns 13 to 18 into the third row (after the "]"), and so on?
Thank you in advance for your help!
Hi @Spurgeon74, I've used the sample from @jgeddes.
This query should work with any number of columns. If you don't know how to use it, read the note below my post.
v1
Output
let
    Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45Wio42NDI3NVXSUTLUMzSwANLu+fkpQMo/GyJmZgaiDcz1LM1igSwk5QaWWJSbmICVm+mZWxoZoWswxa3BVM/Swjw2Vik2FgA=", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [Column1 = _t, Column2 = _t, Column3 = _t, Column4 = _t, Column5 = _t, Column6 = _t, Column7 = _t, Column8 = _t, Column9 = _t, Column10 = _t, Column11 = _t, Column12 = _t, Column13 = _t, Column14 = _t, Column15 = _t, Column16 = _t, Column17 = _t, Column18 = _t]),
    // Join each row's cells with "||", split the combined text on "]",
    // then trim any leftover "|", "[" and "]" characters from each piece
    TransformedList = Table.ToList(Source, (x) =>
        [
            a = Text.Combine(x, "||"),
            b = Splitter.SplitTextByDelimiter("]")(a),
            c = List.Transform(b, (x) => Text.Trim(x, {"|", "[", "]"}))
        ][c]),
    // Drop empty pieces and turn the rest into a single-column table
    ToTable = Table.FromList(List.Select(TransformedList{0}?, (x) => Text.Length(x) > 0)),
    // Split that column back out on the "||" delimiter
    SplitColumnByDelimiter = Table.SplitColumn(ToTable, "Column1", Splitter.SplitTextByDelimiter("||", QuoteStyle.Csv))
in
    SplitColumnByDelimiter
v2 - if you want to preserve column names:
Output
let
    Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45Wio42NDI3NVXSUTLUMzSwANLu+fkpQMo/GyJmZgaiDcz1LM1igSzaqo+NBQA=", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [#"Startdatum (UTC)" = _t, Wert = _t, Qualität = _t, Status = _t, Minimalwert = _t, Maximalwert = _t, #"Startdatum (UTC).1" = _t, Wert.1 = _t, Qualität.1 = _t, Status.1 = _t, Minimalwert.1 = _t, Maximalwert.1 = _t, #"Startdatum (UTC).2" = _t, Wert.2 = _t, Qualität.2 = _t, Status.2 = _t, Minimalwert.2 = _t, Maximalwert.2 = _t]),
    // Join each row's cells with "||", split the combined text on "]",
    // then trim any leftover "|", "[" and "]" characters from each piece
    TransformedList = Table.ToList(Source, (x) =>
        [
            a = Text.Combine(x, "||"),
            b = Splitter.SplitTextByDelimiter("]")(a),
            c = List.Transform(b, (x) => Text.Trim(x, {"|", "[", "]"}))
        ][c]),
    // Drop empty pieces and turn the rest into a single-column table
    ToTable = Table.FromList(List.Select(TransformedList{0}?, (x) => Text.Length(x) > 0)),
    // Split that column back out on "||", reusing the first six
    // column names from the original source as the new headers
    SplitColumnByDelimiter = Table.SplitColumn(ToTable, "Column1", Splitter.SplitTextByDelimiter("||", QuoteStyle.Csv), List.FirstN(Table.ColumnNames(Source), List.Count(Text.Split(ToTable{0}[Column1], "||"))))
in
    SplitColumnByDelimiter
This strategy from Goodly may help you: Unstacking Rows into Columns.
To use this strategy, you would first need to transpose your row into a single column; it also requires that the split occurs every 6 columns.
Here is a quick M code example.
let
    Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45Wio42NDI3NVXSUTLUMzSwANLu+fkpQMo/GyJmZgaiDcz1LM1igSwk5QaWWJSbmICVm+mZWxoZoWswxa3BVM/SwjxWKTYWAA==", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [Column1 = _t, Column2 = _t, Column3 = _t, Column4 = _t, Column5 = _t, Column6 = _t, Column7 = _t, Column8 = _t, Column9 = _t, Column10 = _t, Column11 = _t, Column12 = _t, Column13 = _t, Column14 = _t, Column15 = _t, Column16 = _t, Column17 = _t, Column18 = _t]),
    // Transpose the single row into a column and take it as a list
    #"Transposed Table" = Table.Transpose(Source)[Column1],
    // Split the list into chunks of 6 values each
    Custom1 = List.Split(#"Transposed Table", 6),
    // Turn each chunk into a record with six generic field names
    Custom2 = List.Transform(Custom1, each Record.FromList(_, {"Column1", "Column2", "Column3", "Column4", "Column5", "Column6"})),
    // Assemble the records into the final table
    Custom3 = Table.FromRecords(Custom2)
in
    Custom3
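The key step here is List.Split, which partitions a flat list into consecutive chunks of a given size. A minimal standalone sketch of the same idea (the sample values and field names below are made up purely for illustration):

```
let
    // Two measurement entries flattened into one list of 12 values
    flat = {"2015-07-01", "1", "good", "ok", "0", "2",
            "2015-07-02", "3", "good", "ok", "1", "4"},
    // Partition into chunks of 6 - one chunk per measurement entry
    chunks = List.Split(flat, 6),
    // Each chunk becomes one record, i.e. one row of the result
    rows = List.Transform(chunks, each Record.FromList(_, {"Startdatum", "Wert", "Qualität", "Status", "Min", "Max"})),
    result = Table.FromRecords(rows)
in
    result
```

The result is a two-row, six-column table, which is exactly the reshaping the question asks for.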
Proud to be a Super User!
Hello!
The script wraps to a new row after the sixth column as intended, but all the numerical values are changed.
E.g., Column1 content before: [[1435622400000
Content after: [[12755
To clarify, the example I created was just made-up data with shortened values. Are you replacing my 'Source' line with your actual source?
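For reference, swapping the embedded sample for real data only means replacing the Source step; the steps after it stay the same. A sketch, where the file path and delimiter are assumptions to adjust to your actual data:

```
let
    // Hypothetical path and delimiter - replace with your actual source
    Raw = Csv.Document(File.Contents("C:\Data\Messdaten.csv"), [Delimiter = ",", QuoteStyle = QuoteStyle.Csv]),
    Source = Table.PromoteHeaders(Raw),
    // ...continue with the transpose/split steps from the query above,
    // which operate on Source unchanged
    #"Transposed Table" = Table.Transpose(Source)[Column1]
in
    #"Transposed Table"
```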
Proud to be a Super User!