Hi everypeeps
So we have a large reporting suite, running to hundreds, maybe even a thousand pages of reports overall. The data is brought in from a variety of sources, from SQL to OData to Snowflake to Excel files stored on SharePoint. We often have to combine different data sources for a report, and most of the reports still run off their own semantic model (maybe shared with 3-4 other reports, maybe not).
We were getting a lot of inconsistency between reports for the same metrics, so we tried consolidating the semantic models into a single, large semantic model. This has proved quite unwieldy to use, and it massively bottlenecks the machines most of us use to write/edit the reports and do ETL on the data (it can take a couple of minutes on one of our "standard" machines between hitting the "New Measure" button and the bar becoming available to type DAX into).
Our next plan is to move all the semantic modelling into SQL and just have a single source-of-truth database. Fabric is not an option: my COO refuses due to the variable costs and the fact that we can't easily come up with solid numbers on the kind of processing power we'll need until we actually start loading large amounts of data into it. Our budgets are very small (UK education sector - really feeling the squeeze right now).
Trouble is, this will still leave measures that have to be done in Power BI itself. SQL could give us a student's attendance YTD, or for a given week or half term, as a single aggregated figure, but people want custom date ranges or custom groups of students, and I don't see any way SQL can do that kind of calculation and still give us remotely friendly data to work with.
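The closest I can see to squaring that circle is having SQL serve the facts at day grain and keeping the DAX deliberately thin, so the flexibility comes from filter context rather than from SQL. A minimal sketch, assuming hypothetical AttendanceFact, Dates and Students tables with one fact row per student per day:

    -- Day-grain fact exported from SQL (all names hypothetical):
    -- AttendanceFact[StudentKey], AttendanceFact[Date],
    -- AttendanceFact[SessionsPossible], AttendanceFact[SessionsPresent]
    Attendance % =
    DIVIDE (
        SUM ( AttendanceFact[SessionsPresent] ),
        SUM ( AttendanceFact[SessionsPossible] )
    )

Because the measure just re-aggregates whatever rows survive the current filter context, custom date ranges and ad-hoc groups of students come free from slicers, while the definition of what counts as a possible/present session stays in SQL.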
I have thought about having a single semantic model per area, such as attendance, assessment, rewards, etc. But we have a number of reports that combine these areas, such as attendance and behaviour, or interventions and assessment, and when you combine semantic models in a single report you can't easily edit either of them. I'm coming more and more to the conclusion that this is the only way we're going to get consistency in the reports, though. I'm also looking at splitting the reports into different "topics" as best we can and using Power BI apps to deliver them out to the organisation.
Is there a better way to approach this? I know it's a wide-open question, so even recommendations for blogs, books, YouTube channels, etc. to go and look through would be appreciated. I've come from an environment where not only did we have a single database to extract the data from, but we also had much less complexity, into an organisation with a vast and complex report suite already in place (which I am looking to simplify). The constant complaint right now is "this report says X, that report says Y for the same metric" (sometimes one of those reports is something we haven't even touched for weeks or months), and it's not only damaging our reputation among the consumers of the reports, it's so g'dd'mn frustrating!
May I ask if you have resolved this issue? If so, please mark the helpful reply and accept it as the solution. This will help other community members who have similar problems solve them faster.
Thank you.
For any sufficiently large company, "single source of truth" is an unattainable myth. The raw data, maybe (via a data lake with medallion architecture). But the semantic models need to be tailored to the individual business question for each report. Trying to stuff everything into a single data model will stumble at the 50K cardinality hurdle and come to a screeching halt at the 1M row limit for DirectQuery.
Thanks lbendlin
So what's the solution to having some complex DAX measure exist in multiple different semantic models, needing to change it, and keeping track of everywhere it exists so that all the copies can be updated simultaneously to avoid inconsistency between models? Our different schools want to track and report in different ways, and it would be counter-productive for us to force them into alignment just because it would make the data management easier, so we can't easily have standardised reports. But we also need to report at a high level on metrics across the schools without the reports giving out different numbers.
I can think of ways to do it, but none of them are great: either we carry a large inconsistency risk or an overbearing admin load for the data team...
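The closest I've got is a calculation group in each model, so the date-range variants live once per model rather than once per measure. A rough sketch of the calculation items, assuming a hypothetical Dates table marked as a date table (calculation groups are authored in Tabular Editor or the model explorer rather than the measure bar):

    -- Calculation group "Time Calcs"; each item is applied to whatever
    -- base measure is on the visual. All names hypothetical.
    -- Item: Current
    SELECTEDMEASURE ()
    -- Item: Year to Date
    CALCULATE ( SELECTEDMEASURE (), DATESYTD ( Dates[Date] ) )
    -- Item: Last 7 Days
    CALCULATE (
        SELECTEDMEASURE (),
        DATESINPERIOD ( Dates[Date], MAX ( Dates[Date] ), -7, DAY )
    )

That shrinks the duplication inside each model, but the base measures still have to be kept in sync across models by hand, which is exactly the admin load I'm worried about.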
The solution would be to have a Data Governance body that actually has teeth, money, and C-suite support. You can have endorsed and (especially) certified semantic models, and you can have policies in place so that anyone not using these gets inferior support, etc. I would love to see that become reality some day.
We can dream... 🙈🙉🙊