We're running a large model in a Premium workspace as a shared dataset. Users connect to it from their desktops to build reports using DirectQuery.
Some users need to bring their own data and combine it with the data in the shared model. They create composite models that use DirectQuery to the shared model and combine it with their local tables.
However, they constantly run into the 1,000,000-row limit and get an error, even when using measures (not calculated columns).
I would rather not bring every little piece of data users might need into the big model, but at this point I don't see how else to solve it. This limitation also renders the whole composite-model approach almost useless for our scenario.
What is the best practice we could follow?
What we found is that it is absolutely critical to understand the cardinality of the link fields on both sides of the composite model. This is what kills the usability: anything above 10K unique values will result in unhappy users, unhappy capacity admins, or both.
In other words, try to drastically reduce the cardinality before linking the data models; a sketch of one way to do that follows.
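To make that concrete, here is a minimal DAX sketch, assuming a hypothetical imported table LocalSales with a high-cardinality OrderTimestamp column, a coarser OrderDate column, and an Amount column (none of these names come from the thread). The first definition checks the cardinality of a candidate link column; the second pre-aggregates the local BYO data to date grain so the relationship column carries far fewer unique values:

```dax
-- Hypothetical names throughout: LocalSales, OrderTimestamp, OrderDate, Amount.

-- 1) Quick check: how many unique values does the candidate link column hold?
--    Anything in the millions is a bad sign for a composite-model relationship.
Link Cardinality = DISTINCTCOUNT ( LocalSales[OrderTimestamp] )

-- 2) Pre-aggregate the local table to date grain before relating it to the
--    shared model, so the link column stays well under the ~10K mark.
LocalSalesByDay =
ADDCOLUMNS (
    SUMMARIZE ( LocalSales, LocalSales[OrderDate] ),   -- distinct dates only
    "LocalAmount", CALCULATE ( SUM ( LocalSales[Amount] ) )
)
```

Relating LocalSalesByDay[OrderDate] to the date table in the shared model keeps the join column in the thousands of unique values instead of millions, which keeps the queries Power BI generates across the DirectQuery boundary small.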