Best solution for migrating to Fabric?
Hi there,
I'm about to develop a new, small data warehousing solution using Synapse Analytics. I don't have the opportunity to develop this in Fabric yet; however, I'd like to make sure that I can easily migrate it to Fabric at a later date.
I appreciate that Microsoft has not yet announced which migration tools will be available, so I am wondering: what is likely to be the easiest migration path from Synapse to Fabric?
In the short term, is it best to develop a set of delta-based dim and fact tables using:
- Spark pool-based scripts?
- SQL Serverless-based solution?
- Synapse Lake database?
- Other?
I hope that makes sense.
Thank you
The really quick answer is that I would use Spark for the data ingestion and transformation and then Serverless Pools for the serving layer (pointing Power BI to the Serverless Pool).
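As a sketch of the transformation side of that pattern (the table names, columns, and storage path below are hypothetical placeholders, not from this thread), a delta-based dim table could be built in a Spark pool notebook with a Spark SQL cell:

```sql
-- Sketch only: dim_customer, staging_customers, and the abfss path are
-- hypothetical placeholders. Runs in a %%sql cell on a Synapse Spark pool.
CREATE TABLE IF NOT EXISTS dim_customer (
    customer_key  BIGINT,
    customer_id   STRING,
    customer_name STRING
)
USING DELTA
LOCATION 'abfss://lake@<yourstorage>.dfs.core.windows.net/gold/dim_customer';

-- Rebuild the dimension from a staging table on each load
INSERT OVERWRITE dim_customer
SELECT
    monotonically_increasing_id() AS customer_key,  -- surrogate key
    customer_id,
    customer_name
FROM staging_customers;
```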
Hi @KevinConanMSFT ,
Great, thank you. Just so I understand, do you mean:
- Use Spark (probably Pyspark) to ingest data AND create dim and fact tables
- Use a serverless database to point to these dim and fact tables and serve them to Power BI (but not perform any transformations to create the dim/fact tables within the serverless database itself)?
Hope that makes sense,
Thank you
Greetings!
If you are looking at that route, I would suggest looking at a combination of Spark Pools and Serverless. That will get you into the mindset of using a Lakehouse which will be easier to port to Fabric. Plus, when you do move it to Fabric, you'll get the SQL Endpoint for the Lakehouse which is an easy way to get T-SQL capability over data managed by Spark!
Great, thanks for that. Would you be able to offer a little more perspective on how you would use Spark Pools and Serverless when designing a data warehousing solution, i.e. what you would use the Spark and Serverless components for specifically?
Thank you
Greetings!
We are building migration tools that will make it easy for you to migrate from a Synapse Dedicated Pool to Fabric.
You can also try out Fabric today to see if it will meet the needs of your new data warehouse.
Great, thanks for the info. However, we don't use dedicated pools.
Are you able to advise which of the other options (e.g. a serverless database, a lake database, or PySpark scripts) would be the easiest way to develop a data warehousing solution now and migrate it to Fabric in the future?
Thank you!
Mikael
