spark.read.json offers two functionalities:
1. You can impose a schema on top of spark.read.json using spark.read.schema(schema).json(df). This ensures that only records that fit the schema are produced as rows in the resulting DataFrame.
2. You can send the records that don't fit the schema to a badRecordsPath using spark.read.option("badRecordsPath", path).schema(schema).json(df) (see the sketch after this list).
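For reference, a minimal sketch of the two call shapes, assuming a notebook where spark is the existing SparkSession; note that json() accepts a path, a list, or an RDD of JSON strings (the schema, field names, and paths below are placeholders):

from pyspark.sql.types import StructType, StructField, StringType, IntegerType

schema = StructType([
    StructField("id", StringType()),
    StructField("col1", IntegerType()),
])

# 1. Impose a schema on the JSON source (placeholder path).
valid_df = spark.read.schema(schema).json("Files/input/")

# 2. Additionally divert non-conforming records to a badRecordsPath (a Databricks option).
valid_df = (spark.read
            .option("badRecordsPath", "Files/bad_records/")
            .schema(schema)
            .json("Files/input/"))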
While this works as described above in Databricks, the same doesn't apply to Fabric.
As you can see in the code below, I have created a DataFrame of JSON records on which I want to impose a schema; the second record has a string for a value that should be of IntegerType.
1. In Databricks I get only 2 records in validRecordsTemp, as expected, and the bad record is written to the defined path.
2. In Fabric, however, I get all 3 records, with NULL as the value for col1 in record 2.
Databricks result:
Fabric result:
Example Code:
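The original snippet is not shown here, so the following is a hypothetical reconstruction of the repro described (three JSON records, the second carrying a string where IntegerType is expected); the record contents, schema, and paths are assumptions, not the poster's exact code:

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

spark = SparkSession.builder.getOrCreate()  # already defined in Fabric/Databricks notebooks

json_records = [
    '{"id": "a", "col1": 1}',
    '{"id": "b", "col1": "not_an_int"}',  # bad record: string where IntegerType is expected
    '{"id": "c", "col1": 3}',
]

schema = StructType([
    StructField("id", StringType()),
    StructField("col1", IntegerType()),
])

# Impose the schema and divert non-conforming records to badRecordsPath.
validRecordsTemp = (spark.read
                    .option("badRecordsPath", "Files/bad_records/")  # placeholder path
                    .schema(schema)
                    .json(spark.sparkContext.parallelize(json_records)))

validRecordsTemp.show()
# Reported behavior: Databricks returns 2 rows and writes the bad record under badRecordsPath;
# Fabric returns all 3 rows with NULL in col1 for the second record.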
Hi @anawast,
I suppose this may be related to internal processing: the invalid value is recognized and converted to a default value (NULL). If you do not want these records in the result, you may need to filter them out before they are loaded into the DataFrame.
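If it helps, here is a minimal sketch of one alternative way to separate the mismatched records, using Spark's standard PERMISSIVE mode with a corrupt-record column and filtering after the read (a different approach from pre-filtering). It assumes the type-mismatched record is flagged in that column; the schema and path are placeholders, and spark is the existing session.

from pyspark.sql.functions import col
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

schema = StructType([
    StructField("id", StringType()),
    StructField("col1", IntegerType()),
    StructField("_corrupt_record", StringType()),  # receives the raw text of records that fail parsing
])

df = (spark.read
      .schema(schema)
      .option("mode", "PERMISSIVE")
      .option("columnNameOfCorruptRecord", "_corrupt_record")
      .json("Files/input/"))  # placeholder path

df.cache()  # cache first: Spark disallows queries that reference only the corrupt-record column

valid_records = df.filter(col("_corrupt_record").isNull()).drop("_corrupt_record")
bad_records = df.filter(col("_corrupt_record").isNotNull())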
Regards,
Xiaoxin Sheng