Hi,
We have around 15 user groups. Each user group has its own focus and its own data, but the dimension 'payment agreements' is connected to all of those different areas.
I suppose we could have a shared dataset for each user group, consisting of a fact table with its dimensions.
When the dimension 'payment agreements' is attached to every shared dataset (let's suppose there are 15 datasets), does this mean it has to be refreshed multiple times, i.e. 15 times? Or is there a way to refresh that dimension of 17 million rows only once, so that every shared dataset is updated automatically?
Regards
Ron
Hey Ron,
That is one purpose of dataflows, but another is to have an independent refresh schedule and to break mega-models up into components for efficiency. There is a lot to love about dataflows; documentation link here.
Matt Roche provides the best info on his blog about dataflows. This is the most relevant to read up on: link here, as well as here for the data refresh: link here
The refresh schedule of the dataflow is independent of any downstream dataset that uses it, but the refresh of the dataset needs to be timed and synchronized with the dataflow's, so that it runs after the dataflow completes. Thus, when the dataset is refreshed, it doesn't cause the dataflow tables to refresh over and over.
Have you thought of making that dimension a dataflow and then incorporating it into your datasets? That way you refresh it only once, in the dataflow, instead of multiple times.
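To make the idea concrete: once the dimension lives in a dataflow, each dataset imports it via the Power Platform dataflows connector in Power Query. A minimal M sketch, assuming a hypothetical workspace "Shared Dimensions", dataflow "Payment Agreements DF", and entity "PaymentAgreements" (all placeholder names; the navigation steps are what the connector's Navigator typically generates):

```
// Hypothetical Power Query (M) sketch: importing the shared
// 'payment agreements' dimension from a dataflow into a dataset.
// Workspace, dataflow, and entity names are placeholders.
let
    // Connect to the Power Platform dataflows service
    Source = PowerPlatform.Dataflows(null),
    // Navigate to the workspace that hosts the shared dataflow
    Workspaces = Source{[Id = "Workspaces"]}[Data],
    SharedWorkspace = Workspaces{[workspaceName = "Shared Dimensions"]}[Data],
    // Navigate to the dataflow and pick the dimension entity
    SharedDataflow = SharedWorkspace{[dataflowName = "Payment Agreements DF"]}[Data],
    PaymentAgreements = SharedDataflow{[entity = "PaymentAgreements"]}[Data]
in
    PaymentAgreements
```

Each of the 15 datasets would contain a query like this, so the 17-million-row source is extracted only once per dataflow refresh, and the datasets just import the staged result.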
Hi @gregpetrossian, I'll have to read about that. I thought dataflows were more for business users to model their own data.
But what you mean is making a dataflow of that dimension, and then that dataflow becomes part of the data models / shared datasets in Power BI?
Is there somewhere an example where I can read about it?
Regards
Ron
Hi @PowerRon ,
It seems that you have got the solution. If so, can you please accept the helpful answer as solution? Others who have the same request will benefit from this thread. 😀
@PowerRon - Currently it will need to refresh multiple times, once per dataset. Currently.
Hi @Greg_Deckler, your answer fascinates me, because you say "Currently".
Is there something you know that I don't? 🙂
@PowerRon - https://docs.microsoft.com/en-us/power-platform-release-plan/2020wave2/power-bi/planned-features
See Composite models over Power BI datasets and Azure Analysis Services
Hi @Greg_Deckler
I am fairly new to Power BI but have read about composite models: dual storage mode, whereby either DirectQuery or Import mode is used depending on the query.
But I don't fully understand how this helps ensure that my main dimension only has to be refreshed once.
Can you explain with a little example?
Regards
Ron
@PowerRon - In theory, and I don't speak for the product team or Microsoft, but in theory you would be able to build a composite data model that includes a live connection to a Power BI dataset. So you could store the dimension that is shared across datasets once, as its own dataset, and then connect to it live from your other datasets. Make sense?