I have several copy jobs (the "copy job" experience) reading .parquet files from a lakehouse and writing them to a Fabric warehouse; the files are delivered via Golden Gate.
I have ~15 jobs in total, but the three largest eventually fail and have to be rebuilt. By larger I mean several hundred columns with hundreds to thousands of row updates per hour.
A job will run fine for a day or three, then fail with the following error:
ErrorCode=MissingSchemaForAutoCreateTable,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Failed to auto create table for no schema found in Source.,Source=Microsoft.DataTransfer.TransferTask,'
Any advice would be appreciated.
Hi @Jeff_Schreck,
Thanks for reaching out to Microsoft Fabric Community.
In addition to what was already shared by @burakkaragoz, there’s a known issue related to pipeline-based Copy Activity in Fabric that shows the same error - MissingSchemaForAutoCreateTable. This occurs when the destination is an empty Lakehouse table and the schema can’t be inferred from the source.
Known issue - Pipeline can't copy an empty table to lakehouse - Microsoft Fabric | Microsoft Learn
While this applies to pipelines, the underlying issue - missing or undetectable schema in the source - may be relevant to Copy Jobs as well. If your job is relying on auto table creation, it’s worth checking if the Mapping step has column definitions set manually using the “+ New Mapping” option. This can help avoid failures caused by inconsistent or missing schema in the source .parquet files.
Also, double-check that none of the .parquet files in the source folder are empty or missing metadata, and confirm that the Golden Gate pipeline preserves schema consistently across all files.
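In case it's useful, here's a minimal sketch for spotting empty or metadata-less files with pyarrow (assumed available, e.g. in a Fabric notebook). The folder path is hypothetical, so point it at whatever location your copy job reads from:

```python
# Quick check for empty or footer-less .parquet files, sketched with
# pyarrow (assumed available). The folder path below is hypothetical.
import pyarrow.parquet as pq
from pathlib import Path

folder = Path("/lakehouse/default/Files/goldengate")  # hypothetical path

for f in sorted(folder.glob("*.parquet")):
    try:
        meta = pq.ParquetFile(f).metadata  # reads only the file footer
    except Exception as e:
        print(f"UNREADABLE  {f.name}: {e}")  # footer missing or corrupt
        continue
    if meta.num_rows == 0:
        print(f"EMPTY       {f.name}: 0 rows")
```

Reading only the footer via ParquetFile(...).metadata keeps the scan cheap even when the files themselves are large.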
Hope this helps. Please reach out if you need further assistance.
If this post helps, please consider giving it kudos and accepting it as the solution so other members can find it more quickly.
Thank you.
Hi @Jeff_Schreck,
Sorry to hear you’re running into this. For those larger copy jobs, that error usually means the .parquet files either don’t have a defined schema or the Copy Job can’t detect one. Schema inference can also fail if the data in your files is too varied or if some files are missing required metadata.
A couple of things to check:
- Make sure none of the .parquet files in the source folder are empty or missing footer metadata, since schema inference fails on such files.
- Verify that the schema is consistent across all the files Golden Gate delivers; inference can break when column definitions vary between files.
- If the job relies on auto table creation, consider defining the destination table or the column mappings up front so nothing depends on inference.
Let me know if you already tried these or if you need a sample script for schema validation.
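For reference, here's a minimal schema-validation sketch along those lines, again assuming pyarrow is available and using a hypothetical folder path:

```python
# Schema-consistency check across .parquet files, sketched with pyarrow
# (assumed available). Flags any file whose schema differs from the
# first file's -- the kind of drift that breaks auto table creation.
import pyarrow.parquet as pq
from pathlib import Path

folder = Path("/lakehouse/default/Files/goldengate")  # hypothetical path

reference = None  # schema of the first file seen
for f in sorted(folder.glob("*.parquet")):
    schema = pq.read_schema(f)  # reads only the file footer
    if reference is None:
        reference = schema
    elif not schema.equals(reference):
        print(f"MISMATCH  {f.name}: schema differs from the first file")
```

Comparing every file's schema against the first one surfaces exactly the inconsistencies that can cause MissingSchemaForAutoCreateTable.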
If my response resolved your query, kindly mark it as the Accepted Solution to assist others. Additionally, I would be grateful for a 'Kudos' if you found my response helpful.