tanthiamhuat88
New Member

Fabric Deployment Pipelines

I have a few questions on the Deployment Pipeline.

a) I read in the Microsoft documentation that semantic models are deployed to the target stage as metadata only; that is, the deployment process does not copy the actual data, and the target stage requires a refresh to fetch data into the semantic model. When I tested this, it did not turn out to be the case: in my target stage, when I click the deployed semantic model, I can already see all the tables there. Am I missing something?

b) When using a deployment pipeline, will Dataflow Gen1 dataflows be deployed to the target stage? I read that only Gen2 dataflows are supported, and I just want to confirm my understanding. However, when I tested with a Dataflow Gen1, it was fully deployed to the target stage too.

1 ACCEPTED SOLUTION
Vinodh247
Solution Sage

Hi,

 

I have tried to differentiate what the Microsoft docs say from what might actually happen in Fabric deployment pipelines. Hope this helps!

 

Semantic models - "only metadata" vs actual data

  • What the Microsoft documentation means by "only metadata is deployed" is that the deployment pipeline does not trigger a data refresh in the target stage. It takes the model definition (tables, measures, relationships, etc.) from the source and applies it to the target.
  • However, if the target stage already has a dataset with the same ID and cached data present, the deployment overwrites the metadata but leaves the existing data in place. That is why, after deployment, when you open the semantic model in the target stage, you see all the tables already populated: it is showing the previously cached data, not newly fetched data.
  • If you delete the target dataset before deployment, or if the model definition changes enough to force a full data clear (for example, removing or replacing tables), you will see the “no data” state and have to refresh.
  • So in practice:
    1. Fresh target dataset (no prior data) → post-deployment, it is empty until you refresh.
    2. Target dataset with prior cache → post-deployment, data appears instantly, but it is stale until refreshed.

This behaviour is often misunderstood as “deployment is copying data,” but technically it is just leaving the old cache intact.
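To make the "stale until refreshed" point concrete, here is a minimal sketch (not an official snippet) that queues a refresh on the target-stage model right after a deployment, using the documented Power BI REST API endpoint "Datasets - Refresh Dataset In Group". The token acquisition and the workspace/dataset GUIDs are placeholders you would supply yourself.

```python
# Minimal sketch: queue a refresh on the target-stage semantic model after
# a deployment, via POST .../groups/{groupId}/datasets/{datasetId}/refreshes.
# ACCESS_TOKEN, workspace ID and dataset ID are placeholders (assumptions).
import requests

ACCESS_TOKEN = "<AAD bearer token with Dataset.ReadWrite.All scope>"
TARGET_WORKSPACE_ID = "<target-stage-workspace-guid>"
DATASET_ID = "<deployed-semantic-model-guid>"

url = (
    "https://api.powerbi.com/v1.0/myorg/groups/"
    f"{TARGET_WORKSPACE_ID}/datasets/{DATASET_ID}/refreshes"
)
resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"notifyOption": "MailOnFailure"},
)
resp.raise_for_status()  # 202 Accepted means the refresh was queued
print("Refresh queued; target data is stale until it completes.")
```

Until that refresh completes, the tables you see in the target stage are whatever was cached there before the deployment.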

Dataflows Gen1 in Deployment Pipelines

  • The official documentation says deployment pipelines support Dataflow Gen2, because Gen2 is the Fabric-native generation of dataflows.
  • Dataflow Gen1 (the Power BI service version) is not natively supported in Fabric deployment pipelines with the kind of stage management Gen2 gets.
  • However, when you test with a Dataflow Gen1 inside a deployment pipeline, it may appear to work because the pipeline is essentially copying the dataflow definition (JSON) to the target workspace. This is more of a “content clone” than a true pipeline-managed object.
  • The subtle difference:
    • Gen2: the deployment pipeline understands lineage, refresh scheduling, environment parameters, etc.
    • Gen1: the pipeline just replicates the object; there is no Fabric-native governance or parameterisation support, so you can end up with both environments pointing to the same backend source unless you manually change the parameters (see the sketch below this list).
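To check whether a cloned Gen1 dataflow in the target stage still points at the source-stage backend, here is a rough sketch using the documented "Dataflows - Get Dataflow" REST endpoint, which exports the dataflow's model.json. The token, the workspace and dataflow GUIDs, and "dev-sql.contoso.com" are all placeholders, not real values.

```python
# Rough sketch: export a Gen1 dataflow definition (model.json) from the
# target stage and check whether it still references the dev data source.
# Token, workspace GUID, dataflow GUID and server name are placeholders.
import requests

ACCESS_TOKEN = "<AAD bearer token with Dataflow.Read.All scope>"

def get_dataflow_definition(workspace_id: str, dataflow_id: str) -> str:
    """Return the raw model.json of a Gen1 dataflow as text."""
    url = (
        "https://api.powerbi.com/v1.0/myorg/groups/"
        f"{workspace_id}/dataflows/{dataflow_id}"
    )
    resp = requests.get(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
    resp.raise_for_status()
    return resp.text

target_def = get_dataflow_definition("<test-workspace-guid>", "<dataflow-guid>")

DEV_SERVER = "dev-sql.contoso.com"  # hypothetical dev-stage source name
if DEV_SERVER in target_def:
    print("Target dataflow still points at the dev source; edit its queries.")
else:
    print("Target dataflow does not reference the dev source.")
```

If the dev server name shows up in the target definition, both stages are reading from the same backend and the cloned dataflow's queries need to be edited by hand.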

 


Please 'Kudos' and 'Accept as Solution' if this answered your query.



Regards,
Vinodh
Microsoft MVP [Fabric]


5 REPLIES
v-saisrao-msft
Community Support

Hi @tanthiamhuat88,

We haven’t heard back from you in a while regarding your issue. Let us know if it has been resolved or if you still require support.

 

Thank you.

v-saisrao-msft
Community Support

Hi @tanthiamhuat88,

Checking in to see if your issue has been resolved. Let us know if you still need any assistance.

 

Thank you.

v-saisrao-msft
Community Support

Hi @tanthiamhuat88,

Have you had a chance to review the solution shared by @Vinodh247? If the issue persists, feel free to reply so we can help further.

 

Thank you.

Hi @Vinodh247
Can we say "Dataflow Gen1 dataflows can be deployed to the target stage through deployment pipelines", since we are able to deploy them using Fabric deployment pipelines?
