Hi,
I'd like to know whether there are any tools to facilitate data migration to a Fabric SQL Database.
It seems that the Migration Assistant is specific to a data warehouse.
Thanks
Hi @AhmedMamdoh ,
Thank you for reaching out to the Microsoft Community Forum. Thanks as well to @neelball and @lbendlin for your prompt responses.
The Migration Assistant is scoped to Fabric Data Warehouse, not the operational Fabric SQL Database.
As mentioned by @neelball , Microsoft is working on a migration experience for moving SQL databases into Fabric.
In the meantime, please try the alternative workarounds below to facilitate data migration to a Fabric SQL Database.
1. Use a Copy activity in Fabric Data Factory to create a pipeline that copies data from SQL Server, Azure SQL, or Oracle and loads it directly into Fabric SQL Database tables.
2. Use Fabric Dataflow Gen2 to output the final tables directly into a Fabric SQL Database.
3. Use a Notebook (PySpark) to read the source data and insert it into the SQL Database.
Please refer to the sample code below. The <source-path>, <database>, and <access-token> placeholders are illustrative; substitute your own values.
# Read the source data with Spark (Parquet shown; adjust for CSV/Delta)
df = spark.read.parquet("<source-path>")

jdbc_url = "jdbc:sqlserver://<fabric-sql-endpoint>;databaseName=<database>"

df.write \
    .format("jdbc") \
    .option("url", jdbc_url) \
    .option("dbtable", "dbo.TargetTable") \
    .option("accessToken", "<access-token>") \
    .mode("append") \
    .save()
Note: The above code assumes your data is already readable by Spark (for example CSV, Parquet, or Delta files in Azure Storage or OneLake) and that the Fabric SQL endpoint is reachable via JDBC with valid credentials. Otherwise you will need to adapt the code.
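If you prefer plain Python over Spark, a small pyodbc script can also push rows into a Fabric SQL Database over its SQL endpoint. This is a minimal sketch, assuming the ODBC Driver for SQL Server is installed; the connection string, table, and column names (dbo.TargetTable, Col1, Col2) are placeholders you would replace with your own:

```python
import csv

def read_rows(csv_path):
    """Yield one list of column values per data row (header row skipped)."""
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        next(reader)          # skip the header row
        yield from reader

def load_to_fabric(csv_path, conn_str, table="dbo.TargetTable"):
    """Bulk-insert a CSV into a Fabric SQL Database table via pyodbc."""
    import pyodbc  # pip install pyodbc; requires the ODBC Driver for SQL Server
    conn = pyodbc.connect(conn_str)
    cur = conn.cursor()
    cur.fast_executemany = True  # send the INSERTs in batches for speed
    # The column list here is illustrative -- match it to your own table.
    cur.executemany(
        f"INSERT INTO {table} (Col1, Col2) VALUES (?, ?)",
        list(read_rows(csv_path)),
    )
    conn.commit()
    conn.close()
```

This works best for small-to-medium loads; for very large tables, the Copy activity or a bulk tool will be faster.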
I hope this information helps. Please do let us know if you have any further queries.
Regards,
Dinesh
You can use the standard SQL Server Import and Export Wizard that ships with SQL Server, or other types of SSIS packages.
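As a command-line alternative to the wizard, the bcp utility (installed with the SQL Server command-line tools) can bulk-load a CSV file straight into a table. A minimal sketch, assuming Microsoft Entra ID authentication and a recent bcp version; the server and database names below are placeholders, not real endpoints:

```shell
# Placeholders (assumptions) -- replace with your own Fabric SQL endpoint.
SERVER="<your-endpoint>.database.fabric.microsoft.com"
DATABASE="<your-database>"

# -G : authenticate with Microsoft Entra ID
# -c : character mode; -t , : comma field terminator
CMD="bcp dbo.TargetTable in ./data.csv -S $SERVER -d $DATABASE -G -c -t ,"
echo "$CMD"   # dry run: prints the command instead of executing it
```

Remove the `echo` wrapper to actually run the load once the placeholders are filled in.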