Join us at FabCon Atlanta from March 16 - 20, 2026, for the ultimate Fabric, Power BI, AI and SQL community-led event. Save $200 with code FABCOMM.
The Power BI Data Visualization World Championships is back! Get ahead of the game and start preparing now!
Hi,
We have a data source that is a table of fixed columns, as well as an additional column with JSON of the form:
{"Column 1": "value", "Column 2": "value", ... }
We have hundreds of records, and potentially hundreds of columns, which we do not know in advance.
We want to display this data in a 'normal' PBI table (including sorting, filtering, highlighting, ...) dynamically, after a set of rows has been filtered.
Is this possible in PBI?
Could Deneb or Vega-Lite provide a table visual that can be adapted to this need?
Yes.
Let's say you are importing something like the below table:
SourceTable:
| Fixed Col A | Fixed Col B | Fixed Col C | Variable Json Col |
| 1 | 11 | 21 | {"Column1":"A","Column2":"B1"} |
| 2 | 12 | 22 | {"Column2":"B2","Column3":"C"} |
| 3 | 13 | 23 | {"ColumnA":1,"ColumnB":2} |
| 4 | 14 | 24 | {"ColumnC":3,"ColumnD":4} |
| 5 | 15 | 25 | {"Extra 1":"abc","Extra 2":"def"} |
The following code will parse the JSON column, ascertain all the JSON field names across all rows, then expand all of them:
let
    Source = SourceTable,
    ParseJSON = Table.TransformColumns( Source, {{"Variable Json Col", Json.Document}} ),
    AllJsonFields = List.Distinct( List.Combine(
        List.Transform( ParseJSON[Variable Json Col], Record.FieldNames )
    ) ),
    ExpandAllJsonFields = Table.ExpandRecordColumn( ParseJSON, "Variable Json Col", AllJsonFields )
in
    ExpandAllJsonFields
Output:
Note that all the dynamically expanded JSON fields are of type any, so any type-setting will have to be addressed separately. That said, it would be possible to set the types automatically based on the values.
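As a rough sketch of that idea (assuming the ExpandAllJsonFields step and AllJsonFields list from the query above), you could at least bulk-assign a safe default type to every expanded column:

```
// Sketch only: bulk-set every dynamically expanded column to text as a
// safe default. A smarter version could sample the values per column and
// choose number/date types instead. The step names below come from the
// earlier query; adjust to match your own.
SetDefaultTypes =
    Table.TransformColumnTypes(
        ExpandAllJsonFields,
        List.Transform( AllJsonFields, each { _, type text } )
    )
```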
Thanks for the answer.
We had thought of that, but there are two issues:
- There may be hundreds of columns. Can PBI deal with these?
- For every entity, you will see all the columns (not even ordered) of all other entities.
Any further thoughts?
Q: There may be hundreds of columns. Can PBI deal with these?
A: Yes, although it's almost certainly not useful for analysis. If you provide more context on what kind of reports you want to build, and more specificity on the shape of your data, it would be easier to recommend something. Pasting test data into a table for easy copying into PBI is the best way to get a targeted answer.
Q: For every entity, you will see all the columns (not even ordered) of all other entities.
A: This is unavoidable if you want to expand them all in the same table. Do some rows share the same structure as others, such that it would make sense to split them into separate tables? If it's highly variable without patterns in the data to leverage, I would unpivot all the parsed JSON columns into field/value columns. You can just use unpivot through the UI after expanding, or change up the query steps to convert it before expanding - something like:
let
    Source = SourceTable,
    // parse JSON and then convert to a field/value table
    ParseJSON =
        Table.TransformColumns(
            Source,
            {
                "Variable Json Col",
                each Record.ToTable( Json.Document(_) ),
                type table [Name = text, Value = any]
            }
        ),
    ExpandJsonCol =
        Table.ExpandTableColumn(
            ParseJSON,
            "Variable Json Col",
            {"Name", "Value"},
            {"Name", "Value"}
        )
in
    ExpandJsonCol
Output:
Thanks again, very valid points. We tried this as well, displaying it in a matrix so it 'looks' like a table. It is as close to a solution as we have found.
I will try to attach some data here for exploration.
The complete use-case is the following (I should have explained better in the original post):
We have records from different entities that fail some validation criteria.
(let's say 100 entities in total, each entity with 20 columns, each entity containing 500 failing records)
We will never show a single table for different entities together, we will first filter on each entity separately.
Once the filtering is done, we want to display all the records of this entity in an expanded table.
We could build one report per entity, expanding all its JSON columns, but this is too cumbersome.
Hence the need to filter 1st, and then display the expanded table.
Any thoughts?
Hi, @Montechristos01
Thanks for reaching out to the Microsoft fabric community forum.
If MarkLaf's response has been helpful to you, you might consider accepting it as the solution. This not only acknowledges MarkLaf's effort in resolving your issue but also helps other community members with similar problems find solutions more quickly. This is greatly beneficial for the growth of our community.
Of course, if you have your own solution, feel free to share it with us. Looking forward to your response, thank you in advance!
Best Regards,
Leroy Lu
I appreciate MarkLaf's effort and good insights on the issue.
However, the original question is not fully answered, so I am hesitant to mark it as the accepted solution.
That would lead others to think this issue has been fully resolved, but it has not.
Hope @MarkLaf you understand
I have not had any opportunities to play around with deneb visuals but would also be interested in seeing a deneb solution to this if it's out there. You may have more luck if you post in the custom visual forum and include dummy data and clear depiction of the output you are looking for: https://community.fabric.microsoft.com/t5/Custom-Visuals-Development/bd-p/CustomVisualsDevelopmentDi...
That said, you can use the matrix visual to dynamically show columns based on an entity slicer (would make sense to enforce single-select on the slicer for your purpose). It sounds like you've already tried this? Here is a quick example walkthrough, regardless.
Set up the data using the field/value output described previously. Not much is different here, although we are also creating an entity-row table separate from the field/value table. This isn't required, but it's good to start modeling the data a bit.
SourceData:
| Entity Name | Entity Row ID | Data |
| A | A_1 | {"Entity A_Col 1": 0, "Entity A_Col 2": 1, "Entity A_Col 3": "K" } |
| A | A_2 | {"Entity A_Col 1": 3, "Entity A_Col 2": 4, "Entity A_Col 3": "L" } |
| A | A_3 | {"Entity A_Col 1": 6, "Entity A_Col 2": 7, "Entity A_Col 3": "M" } |
| B | B_1 | {"Entity B_Col 1": "abc123", "Entity B_Col 2": "bcd234"} |
| B | B_2 | {"Entity B_Col 1": "cde345", "Entity B_Col 2": "def456"} |
| C | C_1 | {"Entity C_Col 1": 0, "Entity C_Col 2": 1, "Entity C_Col 3": 2, "Entity C_Col 4": 3, "Entity C_Col 5": 4 } |
| D | D_1 | {"Entity D_Col 1": 1000} |
| D | D_2 | {"Entity D_Col 1": 1001} |
| D | D_3 | {"Entity D_Col 1": 1002} |
| D | D_4 | {"Entity D_Col 1": 1003} |
EntityRows:
let
    Source = SourceData,
    SelectCols = Table.SelectColumns(
        Source,
        {"Entity Name", "Entity Row ID"}
    )
in
    SelectCols
FieldsValues:
let
    Source = SourceData,
    SelectCols = Table.SelectColumns( Source, {"Entity Row ID", "Data"} ),
    Parse = Table.TransformColumns(
        SelectCols,
        {
            "Data",
            each
                let json = Json.Document(_) in
                Table.FromColumns(
                    { Record.FieldNames( json ), Record.FieldValues( json ) },
                    type table [Field = text, Value = any]
                ),
            type table [Field = text, Value = any]
        }
    ),
    ExpandData = Table.ExpandTableColumn( Parse, "Data", {"Field", "Value"}, {"Field", "Value"} )
in
    ExpandData
Load these into your model and relate the tables EntityRows -1---M-> FieldsValues.
You can then set up a slicer with EntityRows[Entity Name] and a matrix with EntityRows[Entity Row ID] for rows, FieldsValues[Field] for columns, and a SELECTEDVALUE( FieldsValues[Value] ) measure for the values.
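For example, the basic values measure could be as minimal as the following (note that SELECTEDVALUE returns BLANK() whenever more than one value is in context; the measure name is just illustrative):

```
Shown Value = SELECTEDVALUE( FieldsValues[Value] )
```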
Here is an example of doing something a little fancier for aggregations than SELECTEDVALUE, which defaults to BLANK() when there are multiple values. Here, we SUM all the values as long as we don't find any text. You can similarly customize the aggregation behavior at the type or even field level; it requires DAX, though - no easier automations that I know of.
ValuesSum =
    VAR _hasTxt =
        NOT ISEMPTY(
            FILTER(
                VALUES( FieldsValues[Value] ),
                IFERROR( NOT ISNUMBER( VALUE( FieldsValues[Value] ) ), TRUE )
            )
        )
    VAR _sumTry =
        SUMX(
            VALUES( FieldsValues[Value] ),
            IFERROR( VALUE( FieldsValues[Value] ), BLANK() )
        )
    VAR _nonAggTxt =
        SELECTEDVALUE( FieldsValues[Value] )
    RETURN
        IF( _hasTxt, _nonAggTxt, _sumTry )
Edit: fixed a sentence I didn't finish and added DAX for measure I mentioned - I think the editor dropped it when I first tried to post?