March 31 - April 2, 2025, in Las Vegas, Nevada. Use code MSCUST for a $150 discount! Early bird discount ends December 31.
@LeeBenjamin
Sorry for the late reply.
We have received the same response from the Microsoft support team.
It's by-design behaviour of the deployment pipeline for composite models, and there is no workaround other than keeping the same data in all environments.
Thanks,
Raki
Hi @Raki39 ,
You can create a Dataflow Gen2 in Data Factory and connect it to AAS, then create a pipeline in Data Factory and have the pipeline select the dataflow you just created.
Once the pipeline is connected to the dataflow, it will reflect any data changes made in the dataflow.
A dataflow is similar to Power Query; it offers essentially the same capabilities.
For more information about Data Factory and Dataflow, please refer to :
What is Data Factory - Microsoft Fabric | Microsoft Learn
Creating a dataflow - Power BI | Microsoft Learn
Please refer to the documentation below for specific steps on how to do the above:
Copy Data from Azure Analysis Service (AAS) through Microsoft Fabric (c-sharpcorner.com)
If you have any other questions please feel free to contact me.
Best Regards,
Yang
Community Support Team
If any post helps, please consider accepting it as the solution to help other members find it more quickly.
If I have misunderstood your needs or you still have problems, please feel free to let us know. Thanks a lot!
@v-huijiey-msft Thank you so much for the response.
I apologize for any confusion in my previous explanation. To ensure clarity, I've provided additional detail below with an attachment.
"In our setup, we have Product and Sales tables in the AAS dev data source, while only the Sales table exists in the AAS QA and AAS prod data sources. Our Power BI semantic data model is connected to the AAS dev source in the Dev workspace. After deploying the semantic model to the QA and prod workspaces using a deployment pipeline, we noticed that the Product table appears in the deployed QA and prod semantic data models, even though it's not present in the respective AAS data sources. Despite refreshing the metadata, the Product table continues to appear. We're unable to identify the root cause of this issue.
Could you please review it and offeer any possible solution or insight
Regards,
Raki
Hi @Raki39
If I understand the scenario correctly, then this is by design.
Deployment pipelines pair two items across neighbouring stages so that, on future deployments, the pipeline knows which item to overwrite on the target stage. Pairing is done based on the combination of item type and name; in case of a conflict, the item's full path (workspace folders) is considered as well.
With that, it seems your semantic model items on Dev and QA are paired, and likewise those on QA and Prod, since they have the same name and item type. Once paired, the item on the target stage is overwritten regardless of whether it has the same tables as its source. A full comparison of two paired items is only done as part of the deployment pipeline's compare process, which presents an indication of how the target item differs from its source item.
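To picture the pairing rule described above, here is a rough, illustrative Python sketch (my own simplification, not the deployment pipeline's actual implementation): match items by (type, name), and fall back to the full workspace-folder path only when that key is ambiguous.

```python
from collections import Counter

def pair_items(source_items, target_items):
    """Illustrative sketch: items on neighbouring stages are matched by
    (item type, name); the full workspace-folder path is used as a
    tiebreaker only when that key is ambiguous."""
    def key(item, with_path=False):
        k = (item["type"], item["name"])
        return k + (item["path"],) if with_path else k

    # A (type, name) key that occurs more than once is a conflict,
    # so those items must also match on their full path.
    src_counts = Counter(key(i) for i in source_items)

    pairs = []
    for src in source_items:
        ambiguous = src_counts[key(src)] > 1
        for tgt in target_items:
            if key(src, ambiguous) == key(tgt, ambiguous):
                pairs.append((src["name"], tgt["name"]))
                break
    return pairs

# Items of the same type and name pair up even if their contents differ,
# which is why the Dev model (with Product) overwrites the QA one (without it):
dev = [{"type": "SemanticModel", "name": "Sales Model", "path": "/"}]
qa = [{"type": "SemanticModel", "name": "Sales Model", "path": "/"}]
print(pair_items(dev, qa))
```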
Best regards,
Lee
Thanks For the explanation @LeeBenjamin
Despite refreshing the semantic model data against the QA/Prod source after deployment, the metadata isn't updating. Consequently, columns and tables that are not present in the QA and Prod data sources remain visible in the semantic data model for both environments. Is this expected behavior within a deployment pipeline when using a semantic model? It's worth noting that we've successfully deployed multiple reports with direct Power BI datasets using live connections through the deployment pipeline without encountering this issue.
@Raki39 This works by design: the model's schema in QA is the same as in Dev, and it still uses the PRODUCT table (even though the data source doesn't have it). Deployment does not erase existing data, so either the new data source must have all the tables the model uses, or you can add a parameter (again controlled by a deployment rule) that prevents the model from using that table.
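If you go the parameter route, the per-stage override can also be scripted. Below is a minimal sketch that builds the request body for the Power BI REST API's `Datasets - Update Parameters` endpoint; the parameter name `IncludeProductTable` is a hypothetical example (you would have to define such a Power Query parameter in the model yourself), and authentication/URL details are omitted.

```python
import json

def update_parameters_body(**params):
    """Build the request body for the Power BI REST endpoint
    POST .../datasets/{datasetId}/Default.UpdateParameters,
    which overrides Power Query parameter values in a published model."""
    return {
        "updateDetails": [
            {"name": name, "newValue": str(value)}
            for name, value in params.items()
        ]
    }

# "IncludeProductTable" is a hypothetical parameter: a deployment rule
# (or this API call after deployment) could flip it per stage so the
# Product table's source query is skipped in QA/Prod.
body = update_parameters_body(IncludeProductTable=False)
print(json.dumps(body))
```

You would POST this body (with a bearer token) to the dataset in the QA or Prod workspace after each deployment, then refresh the model.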