Hi Community,
I need to copy data from a Fabric Lakehouse to a Snowflake database. What are the different ways I can move the data?
Thank You!
Hello @rakeshdasari,
There are multiple ways to do this, each with a different tradeoff: direct pipelines are faster but require staging, while external tables avoid copying data but require upfront permissions.
OneLake External Tables
Expose Fabric Iceberg tables directly in Snowflake via external volumes for real-time querying without physical data copying.
https://www.snowflake.com/en/blog/microsoft-partnership-enhancing-interoperability
Azure Blob Intermediate
Stage data in Azure Blob Storage first for large/complex datasets, then use Snowflake’s `COPY INTO` to load from Blob (ideal for batch transfers).
Pipeline Copy Activity
Use Fabric's built-in connector to map Lakehouse data directly to Snowflake via the Copy Data Assistant; for optimal performance this requires manually configuring staging in Azure Blob Storage.
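For the Azure Blob staging route, the Snowflake-side load can be sketched as below. This is a minimal sketch, assuming the Lakehouse table has already been exported to Parquet files in a Blob container; the stage name, table name, container URL, and SAS token are all placeholders, not values from this thread:

```python
# Hypothetical sketch: assemble the Snowflake statements for loading
# Lakehouse Parquet files that were staged in Azure Blob Storage.
# All identifiers (stage, table, container URL, SAS token) are placeholders.

def build_load_statements(stage: str, table: str, container_url: str) -> list[str]:
    """Return the CREATE STAGE + COPY INTO statements for a Parquet batch load."""
    create_stage = (
        f"CREATE STAGE IF NOT EXISTS {stage} "
        f"URL = '{container_url}' "
        "CREDENTIALS = (AZURE_SAS_TOKEN = '<sas_token>') "
        "FILE_FORMAT = (TYPE = PARQUET)"
    )
    copy_into = (
        f"COPY INTO {table} "
        f"FROM @{stage} "
        "MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE "
        "FILE_FORMAT = (TYPE = PARQUET)"
    )
    return [create_stage, copy_into]

# Print the statements you would run (e.g. via a Snowflake worksheet or connector).
for stmt in build_load_statements(
    "lakehouse_stage",
    "my_db.my_schema.sales",
    "azure://myaccount.blob.core.windows.net/export/sales/",
):
    print(stmt)
```

`MATCH_BY_COLUMN_NAME` maps Parquet column names onto the target table's columns, which avoids depending on column order in the exported files.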
If this is helpful, please accept the answer.
Hi @nilendraFabric,
I hope I can use this thread for the following problem:
I created a copy job option to copy a lakehouse table to Snowflake and specified my destination table: <my_schema>.<my_table>
The issue is that the copy job tries to recreate the schema I provided by running
"CREATE SCHEMA IF NOT EXISTS ...." in Snowflake.
The copy job thus ends up with a failed status, because the role used (the default role for <my_user>) isn't allowed to create a schema in the given database.
Also, I don't get to see "Additional connection properties" in my Snowflake connection (wrong licence type?).
Without it I can't specify a pre-copy script that might prevent the "CREATE SCHEMA" issue.
So I'm kind of stuck.
Is there something else I can do to make my copy job work?
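One possible workaround, assuming someone with an admin role can adjust privileges on the Snowflake side: grant the copy job's role `CREATE SCHEMA` on the target database, or pre-create the schema and grant the role rights on it. A minimal sketch of the statements an admin might run (the role, database, and schema names below are placeholders, not the actual names in your environment):

```python
# Hypothetical sketch of the admin-side Snowflake grants that would let
# the Fabric copy job's role run its generated CREATE SCHEMA IF NOT EXISTS.
# Role, database, and schema names are placeholders.

def build_grant_statements(role: str, database: str, schema: str) -> list[str]:
    """Return the GRANT/CREATE statements an admin could run once up front."""
    return [
        f"GRANT USAGE ON DATABASE {database} TO ROLE {role}",
        f"GRANT CREATE SCHEMA ON DATABASE {database} TO ROLE {role}",
        # Alternatively, pre-create the schema and grant the role rights on it:
        f"CREATE SCHEMA IF NOT EXISTS {database}.{schema}",
        f"GRANT ALL ON SCHEMA {database}.{schema} TO ROLE {role}",
    ]

for stmt in build_grant_statements("FABRIC_LOADER", "MY_DB", "MY_SCHEMA"):
    print(stmt)
```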
Hi @nilendraFabric, while connecting to Snowflake using the Copy data activity, I am getting an authentication error (invalid credentials) despite providing valid credentials. Do you know why this is happening?