I am using Azure Data Factory to ingest tables into a Fabric Lakehouse. Ingestion from AWS S3 and SQL Server worked normally.
However, when I ingest from Azure Database for PostgreSQL and MySQL, the tables go to the Files folder instead of the Tables folder.
If I open each table's folder, some contain a parquet file plus delta logs, while others contain only the delta logs. For the tables that do have parquet files, loading them into the Tables folder works and the table appears.
Is there any configuration that would make them go to the Tables folder automatically?
After a lot of investigation, the solution is simple:
When tables are ingested into a lakehouse, they initially land in an unconfirmed folder. If you try to access this folder by its table name, you might see a message stating that a shortcut table cannot be created and that the table should be moved to the Files folder to be kept inside the lakehouse. To resolve this, simply wait a few minutes and then refresh the lakehouse; the table should then appear in the Tables folder.
If you move the unconfirmed tables to the Files folder, they will not go to the Tables folder, and you will have to move them back manually, one by one. That is exactly what was happening to me: I was not waiting, and I was moving all the tables to the Files folder.
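If you want to automate the wait-and-refresh step rather than checking by hand, a minimal polling sketch looks like this. Note the `list_tables` callable is a hypothetical caller-supplied helper (not from this thread); in practice you would back it with the Fabric REST API or a notebook utility that lists the lakehouse's registered tables.

```python
import time


def wait_for_table(list_tables, table_name, timeout_s=600, poll_s=30):
    """Poll until `table_name` appears in the lakehouse Tables list.

    `list_tables` is a caller-supplied function returning the current
    table names (an assumption for this sketch); swap in a Fabric REST
    API call or a notebook utility in a real pipeline.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if table_name in list_tables():
            return True  # table was registered; no manual move needed
        time.sleep(poll_s)
    return False  # timed out; the table may need investigation
```

This avoids the failure mode described above: instead of moving the unconfirmed folder yourself, the script simply waits for the automatic registration to finish.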
Hi @antoniofarias
Glad that your query got resolved. Please continue using Fabric Community for any help regarding your queries.
Hello @v-nikhilan-msft, thanks for your reply!
One more thing: I said it worked with SQL Server and AWS S3, but I noticed it only worked because those tables already existed, so I think the copy simply overwrote them. When the table doesn't exist, it goes to the Files folder.
How the tables are being saved (each folder is a table):
How the files are being saved:
Source:
Sink:
Dataset:
Linked Service:
Hi @antoniofarias
Thanks for using Fabric Community.
Can you please provide a screenshot of the lakehouse Files section and the files created by the pipeline?
Understanding the behavior:
By default, ADF's copy activity treats data from relational databases like PostgreSQL and MySQL as flat structures; when copying, it creates separate files for the data and for the schema information (delta logs).
Configure linked services: Ensure your linked services for PostgreSQL and MySQL are configured correctly. Refer to the Microsoft documentation for details:
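For reference, to land data in the Tables folder the copy activity's sink must target a Lakehouse *Table* dataset rather than Files. A rough sketch of the relevant pipeline JSON is below; the sink/dataset type names and the dataset names (`PostgresSourceDataset`, `LakehouseTableDataset`) are assumptions for illustration, not taken from this thread, so verify them against the Lakehouse connector documentation.

```json
{
  "name": "CopyToLakehouseTable",
  "type": "Copy",
  "inputs": [ { "referenceName": "PostgresSourceDataset", "type": "DatasetReference" } ],
  "outputs": [ { "referenceName": "LakehouseTableDataset", "type": "DatasetReference" } ],
  "typeProperties": {
    "source": { "type": "AzurePostgreSqlSource" },
    "sink": {
      "type": "LakehouseTableSink",
      "tableActionOption": "Append"
    }
  }
}
```

The key point is the sink dataset: if it points at a file path under Files, the copy produces plain files; if it points at a Lakehouse table, the data is written as a Delta table that Fabric registers under Tables.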
There is a similar issue here:
https://stackoverflow.com/questions/76821736/azure-data-factory-copy-activity-only-copying-table-str...
Hope this helps. Please let me know if you have any further questions.