I “migrated” a Fabric project at a client and found this situation:
The Lookup returns a nested JSON array instead of a flat JSON array, even though it reads the same data, in the same format, from a Lakehouse table (both without schema).
Could anyone tell me if Fabric has different versions, or if there are configurations on Azure that impact Fabric?
New Fabric project - nested JSON - doesn't work
{
"name": "Source_Table",
"value": [[
{
"KeyID": 0,
"Source_Schema": "stg",
"Source_Table": "Assembly",
"Destination_Schema": "dbo",
"Destination_File": "Assembly",
"Destination_Table": "Assembly",
"Fields_Filter": "$systemModifiedAt",
"Last_Update": "2000-01-01",
"Import_Type": 1,
"isActive": 1
},
{
"KeyID": 91,
"Source_Schema": "stg",
"Source_Table": "User",
"Destination_Schema": "dbo",
"Destination_File": "User",
"Destination_Table": "User",
"Fields_Filter": "$systemModifiedAt",
"Last_Update": "2000-01-01",
"Import_Type": 1,
"isActive": 1
}]]}
Project - JSON array - works
{
"name": "Source_Table",
"value": [
{
"KeyID": 0,
"Source_Schema": "stg",
"Source_Table": "Assembly",
"Destination_Schema": "dbo",
"Destination_File": "Assembly",
"Destination_Table": "Assembly",
"Fields_Filter": "$systemModifiedAt",
"Last_Update": "2000-01-01",
"Import_Type": 1,
"isActive": 1
},
{
"KeyID": 91,
"Source_Schema": "stg",
"Source_Table": "User View",
"Destination_Schema": "dbo",
"Destination_File": "User View",
"Destination_Table": "User",
"Fields_Filter": "$systemModifiedAt",
"Last_Update": "2000-01-01",
"Import_Type": 1,
"isActive": 1
}]}
Flow created: I take data from a Lakehouse Delta table with a Lookup, set a variable, filter the data, and pass the JSON to a ForEach.
The flow stops at the Filter because it doesn't read the JSON correctly.
@giupegiupe As we haven’t heard back from you, we wanted to kindly follow up and check whether the solution provided for your issue worked, or let us know if you need any further assistance here.
@burakkaragoz Thanks for your prompt response
Thanks,
Prashanth Are
MS Fabric community support
If this post helps, then please consider accepting it as the solution to help the other members find it more quickly, and give Kudos if it helped you resolve your query.
The solution works, as I wrote previously, if I do it with a notebook, but it does not work using Fabric objects.
I cannot pass the information from a Lookup to a variable to a Filter or ForEach for this Fabric tenant, or directly from a Lookup to a Filter or ForEach.
So it is not clear why the JSON array does not work in this tenant, instead producing a nested JSON array, while on other tenants it works correctly.
Thanks
Giuseppe
Hi @giupegiupe,
In this scenario I suggest you raise a support ticket, so that they can assist you in addressing the issue you are facing. Please follow the link below on how to raise a support ticket:
How to create a Fabric and Power BI Support ticket - Power BI | Microsoft Learn
Thanks,
Prashanth Are
MS Fabric community support
We are following up once again regarding your query. Could you please confirm if the issue has been resolved through the support ticket with Microsoft?
If the issue has been resolved, we kindly request you to share the resolution or key insights here to help others in the community. If we don’t hear back, we’ll go ahead and close this thread.
Should you need further assistance in the future, we encourage you to reach out via the Microsoft Fabric Community Forum and create a new thread. We’ll be happy to help.
Thank you for your understanding and participation.
Hi @giupegiupe ,
Yeah, I’ve run into something similar before. When you're working with nested JSON structures—especially ones coming from sources like Azure Data Lake or Dataverse without a defined schema—Fabric sometimes struggles with resolving lookups inside foreach loops, particularly when filters are applied on nested arrays.
From what you shared, it looks like the issue is that the lookup_table is nested too deep, and the filter logic can’t properly resolve the path during runtime. This usually happens when the schema isn’t explicitly defined, so Fabric can’t infer the structure well enough to apply the filter correctly.
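To make the difference concrete, here is a small plain-Python sketch (just the two shapes from your post, nothing from your actual pipeline): the nested output wraps all the rows in one extra list, so per-item access inside a Filter or ForEach no longer sees row objects.
import json

# Shape a Filter/ForEach can iterate: "value" is a list of row objects
flat = json.loads('{"name": "Source_Table", "value": [{"KeyID": 0}, {"KeyID": 91}]}')

# Shape from the failing tenant: the same rows wrapped in one extra list
nested = json.loads('{"name": "Source_Table", "value": [[{"KeyID": 0}, {"KeyID": 91}]]}')

print(type(flat["value"][0]))    # <class 'dict'>  - item["KeyID"] works
print(type(nested["value"][0]))  # <class 'list'> - item["KeyID"] would fail here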
A couple of things you might try:
Let me know if you want help rewriting the JSON or filter logic—happy to take a look.
If my response resolved your query, kindly mark it as the Accepted Solution to assist others. Additionally, I would be grateful for a 'Kudos' if you found my response helpful.
I thank you and really appreciated your answer, but the problem is probably not in the schema, since I am doing a plain table read (the same in the two tenants) from a Lookup. Could it instead come from a different “configuration” of the tenant?
The problem probably comes from the Lookup reading a table with a dozen or so fields. What amazes me is that the same read (direct table, no filter, no schema, nothing else) on two different workspaces, two different subscriptions, and two different tenants behaves differently and produces two different JSON outputs, even though the Lookup activity and the Lakehouse are standard objects.
This means that in the tenant where the Lookup doesn't work you can't use a direct Filter or ForEach unless, as you say, you “flatten” the JSON (which I did with a notebook).
What I can't understand is how a standard object reading a Lakehouse table can have two different behaviors on two different tenants.
The only explanation I can come up with is that there is something in the tenant that affects Fabric. A different Fabric version? A free trial?
The Lookup reads a table in a Lakehouse.
@giupegiupe ,You're absolutely right: when your JSON sources come from multiple configurations or schema variants, Fabric may interpret the structure differently depending on the metadata it reads at runtime.
This happens because:
Here’s what I’d suggest:
This kind of issue is common when working with semi-structured data — especially when schema-on-read is involved.
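As a rough sketch of what schema-on-read means here (illustrative only, using the two shapes from this thread and assuming a Spark session as in a Fabric notebook): Spark infers the type of a column from the records it actually reads, so one extra level of nesting changes the inferred structure of the same field.
# Illustrative only: schema inference depends on the records Spark sees
flat_json   = '{"name": "Source_Table", "value": [{"KeyID": 0}]}'
nested_json = '{"name": "Source_Table", "value": [[{"KeyID": 0}]]}'

df_flat   = spark.read.json(spark.sparkContext.parallelize([flat_json]))
df_nested = spark.read.json(spark.sparkContext.parallelize([nested_json]))

df_flat.printSchema()    # "value" is inferred as an array of structs
df_nested.printSchema()  # "value" is inferred as an array of arrays of structs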
@burakkaragoz I thank you for the explanation, “Fabric tries to infer the pattern dynamically, and if the structure varies (even slightly) between records, it treats them as different shapes...”, which seems to apply to the project I am starting.
Regarding your proposed solutions, I do not use Dataflow, Power Query, or Copy data for cost reasons on Fabric.
The simplest, though least maintainable, solution for a user who is not a PySpark expert is to use a notebook that reads data from a table and turns it into a JSON array that the ForEach can read.
Unfortunately, eliminating the Lookup, Filter, and variables loses the speed of designing a solution.
PS: I did not expect this “dynamicity” in standard Fabric objects, especially in the output... but...
Totally get your point about avoiding Dataflows and PowerQuery due to cost — makes sense in many setups. If you're sticking with notebooks and PySpark, one thing that might help is explicitly defining the schema when reading the data, especially if you're dealing with nested or inconsistent JSON.
Something like:
from pyspark.sql.types import StructType, StructField, StringType, ArrayType

schema = StructType([
    StructField("id", StringType(), True),
    StructField("details", StructType([
        StructField("name", StringType(), True),
        StructField("value", StringType(), True)
    ]), True)
])

df = spark.read.schema(schema).json("your_path")
This way you avoid the dynamic inference that causes those "different shapes" issues. Also, if you're using foreach, consider flattening the structure before the loop to keep things predictable.
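For example, with the hypothetical schema above, flattening before the loop could look like this (a sketch; adjust the column names to your data):
# Pull the nested "details" fields up to top-level columns so every record is flat
df_flat = df.select("id", "details.name", "details.value")
df_flat.show()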
And yeah, you're right — the dynamic behavior in Fabric can be surprising, especially when you're expecting a more rigid schema handling like in traditional pipelines.
If my response resolved your query, kindly mark it as the Accepted Solution to assist others. Additionally, I would be grateful for a 'Kudos' if you found my response helpful.
Thanks for your suggestion. I used this algorithm:
import json

# df: the DataFrame read from the Lakehouse table
df_selected = df.where("isActive = 1").select(
    "ID",
    "Source_Schema",
    "Destination_Schema",
    "Last_Update",
    "Import_Type",
    "isActive"
)

# Create JSON records from the DataFrame
records = df_selected.toJSON().map(lambda x: json.loads(x)).collect()

# Flat JSON array variable available
json_array = json.dumps(records, indent=2)
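In case it is useful to others: a possible way (a sketch, not shown above) to hand json_array back to the pipeline is to return it as the notebook exit value, so a downstream ForEach can consume it from the notebook activity output.
# Assumes the notebook runs inside a Fabric pipeline; mssparkutils is
# available in Fabric/Synapse notebook runtimes.
from notebookutils import mssparkutils

# Return the flat JSON string to the calling pipeline as the exit value
mssparkutils.notebook.exit(json_array)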