Error details
Error code: 21446
Details: Invalid table! Parquet column is not defined in delta metadata. Column name: _change_type.
I am trying to preview data for some tables and it is giving me this error.
The service principal user I am using does have access to the lakehouse as a contributor.
I am able to preview the data and also load it to a Parquet file in ADLS for some entities, for example: account.
But when I run the pipeline to load the data for certain other tables, for example systemuser, the pipeline fails at the Copy data activity with the above-mentioned error.
I have also observed that this error is not consistent across environments: in QA/UAT I may be able to read the data for systemuser but not for a completely different entity, for example the opportunity table.
I am currently using the Microsoft Fabric Lakehouse Table connector for this requirement.
Could anyone please help me understand why this error is happening and how to overcome it?
Hi @Sachin_Patel07,
Just following up to see if the solution provided was helpful in resolving your issue. Please feel free to let us know if you need any further assistance.
Best regards,
Prasanna Kumar
Hi @Sachin_Patel07,
You can solve the issue in two ways. The first option is to create a notebook in your Fabric workspace, select only the required columns from the table (excluding problematic ones like _change_type), and then write the data directly to your ADLS Gen2 storage. For this, you'll need to set up the connection to ADLS using the proper credentials or linked service. The second option is to create a new clean table inside the Lakehouse with only the needed columns, then use your Synapse pipeline's Copy Activity to pull data from this new table. Both options work: the first gives more flexibility, while the second lets you keep using your existing pipeline setup. A sketch of the second option is below.
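For the second option, a minimal PySpark sketch for the notebook cell that builds the clean table. The table name systemuser_clean is an example; dropping columns that do not exist is a no-op in PySpark, so the extra change data feed columns are safe to list:

```python
# Read the original table and drop the change data feed columns
# (_change_type plus the other CDF columns, if present).
source = spark.read.table("systemuser")
clean = source.drop("_change_type", "_commit_version", "_commit_timestamp")

# Save as a new Delta table in the same Lakehouse; point the Synapse
# Copy Activity at systemuser_clean instead of systemuser.
clean.write.format("delta").mode("overwrite").saveAsTable("systemuser_clean")
```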
Thanks & regards,
Prasanna Kumar
Hi @Sachin_Patel07,
The best solution is to avoid using the Copy Activity with the Fabric Lakehouse Table connector for now, since it's likely misinterpreting internal system columns like _change_type, causing the error. Instead, use a Fabric Notebook to read the table (which works fine, as you've confirmed) and then write the data directly to your ADLS Gen2 storage in Parquet format using a simple Spark command. This approach bypasses the Copy Activity issue completely and gives you full control over the schema and data handling.
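As a reference, a minimal sketch of that Spark command, assuming the notebook's identity has write access to the ADLS Gen2 container (the abfss path is a placeholder, not your actual storage account):

```python
# Read the problematic table through Spark, which resolves the Delta
# metadata correctly (unlike the Copy Activity's Parquet reader).
df = spark.read.table("systemuser")

# Drop the change data feed column if it is present.
if "_change_type" in df.columns:
    df = df.drop("_change_type")

# Write directly to ADLS Gen2 as Parquet. Replace the container and
# storage account names with your own.
df.write.mode("overwrite").parquet(
    "abfss://mycontainer@mystorageaccount.dfs.core.windows.net/export/systemuser/"
)
```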
Regards,
Prasanna Kumar
Thanks @v-pgoloju. So I will have to create a notebook in the Fabric workspace and then push the table with the required columns to the ADLS Gen2 storage account container, but the connectivity needs to be established first; only then would I be able to push data to ADLS Gen2.
And is there a possibility to create a new table in the lakehouse with the required columns, push the data to this table, and use a Copy activity from the Synapse pipeline to get the data from this table? I know it's a long route, but I'm just wondering whether it is possible.
Is your pipeline invoking another pipeline by any chance?
No, I am trying to pull data from Fabric lakehouse tables to generate a Parquet file in ADLS Gen2 linked to our Synapse workspace, but I am getting the error at the source dataset in the Copy data activity that I am using to pull the data from Fabric.
Hi @Sachin_Patel07,
Thank you for reaching out to the Microsoft Fabric Forum Community, and a special thanks to @Thomaslleblanc for their valuable input.
Just following up to see if you had a chance to try the suggested steps: creating a new notebook within the Lakehouse, checking for the systemuser table in the left pane, and attempting to create a dataframe by right-clicking on it.
If this helped resolve your issue, please consider marking the appropriate response as the Accepted Solution. If you still need assistance or further clarification, feel free to let us know; we're happy to help!
Best regards,
Prasanna Kumar
Hi, I tried to read the data from this systemuser delta table and I am getting records back; the counts are shown in the tables below.
Can you create a new notebook while inside the Lakehouse? Then list the systemuser table in the list view in the left pane. If yes, right-click it and see if a dataframe can be created from it.
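For reference, the cell this produces looks roughly like the sketch below; the groupBy column name is an assumption for illustration, since the output shared below does not show which field was aggregated:

```python
# Roughly what Fabric generates when you right-click a Lakehouse table
# and load it into a Spark dataframe.
df = spark.read.table("systemuser")
display(df.limit(10))

# A quick per-user record count, similar to the output below;
# "fullname" is an assumed column name, not confirmed by the thread.
display(df.groupBy("fullname").count().orderBy("count", ascending=False))
```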
User | Count
---|---
(not shown) | 17
(not shown) | 14
(not shown) | 8
(not shown) | 7
(not shown) | 5

User | Count
---|---
(not shown) | 30
(not shown) | 28
(not shown) | 20
(not shown) | 15
(not shown) | 11