I have one source account file in Dataflows from which I'm deriving 12 different tables.
For example, the account file has a column called SegmentAllocation that contains nested JSON, which I'm expanding into multiple columns/tables.
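For reference, the expansion step typically looks something like this in Power Query M. This is a minimal sketch: the query name `Account` and the field names `SegmentId` and `Percent` are placeholders, not from the original post.

```m
let
    // Assume "Account" is the query that loads the source account file
    Source = Account,
    // Parse the JSON text in SegmentAllocation into record values
    Parsed = Table.TransformColumns(Source, {{"SegmentAllocation", Json.Document}}),
    // Expand the record into separate columns; field names are illustrative
    Expanded = Table.ExpandRecordColumn(
        Parsed, "SegmentAllocation",
        {"SegmentId", "Percent"}, {"SegmentId", "Percent"})
in
    Expanded
```

Note that the columns produced by `Table.ExpandRecordColumn` come out as type `any` unless they are explicitly typed afterwards.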
My dataflow runs successfully for all 12 derived tables plus the main account table. However, for 4 of these tables, whose destination is a Lakehouse, every row is NULL in every column.
I’ve tried several troubleshooting steps:
Dropping the destination tables and recreating them with the same schema as defined in the dataflow.
Ensuring the schema fully matches both the dataflow and the Lakehouse destination.
Re‑mapping the dataflow to the default data destination.
Despite all of this, these 4 tables still end up with completely NULL values.
Did you maybe forget to specify the column types before sending the results to the Lakehouse?
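Concretely, this means adding an explicit typing step as the last step before the Lakehouse destination, since columns left as type `any` can be written out as NULL. A sketch, assuming `Expanded` is the query's prior step and with illustrative column names:

```m
let
    // Hypothetical prior step producing the expanded columns
    Source = Expanded,
    // Explicitly type every column; "any"-typed columns are the usual
    // cause of all-NULL writes to a Lakehouse destination
    Typed = Table.TransformColumnTypes(Source, {
        {"SegmentId", Int64.Type},
        {"Percent", type number},
        {"AccountName", type text}
    })
in
    Typed
```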
Hi @negideepika06 , Hope you're doing okay! May we know if it worked for you, or are you still experiencing difficulties? Let us know — your feedback can really help others in the same situation.
Hi @negideepika06 , Thank you for reaching out to the Microsoft Community Forum.
Since you’re expanding nested JSON, it’s worth validating the last few steps of those 4 queries to confirm they’re actually returning populated records rather than empty structures. Also, double-check the column-to-destination mapping in the Lakehouse, as mismatches there can result in all-NULL writes without any errors. To isolate the problem, write one of the failing queries to a new table: if it still comes out NULL, the issue is in the transformation layer; if not, it’s in the Lakehouse write/mapping.
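One quick way to validate a query inside the dataflow editor is a temporary diagnostic step that counts how many rows are entirely null. This is only an illustrative sketch; `Expanded` stands in for the failing query's final step:

```m
let
    // Hypothetical name for the failing query's final step
    Source = Expanded,
    // Rows where every field value is null
    AllNullRows = Table.SelectRows(Source,
        each List.IsEmpty(List.RemoveNulls(Record.FieldValues(_)))),
    // If AllNull is 0 here but the Lakehouse table is all NULL,
    // the problem is in the destination write/mapping, not the query
    Counts = [Total = Table.RowCount(Source), AllNull = Table.RowCount(AllNullRows)]
in
    Counts
```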