We're running a large model in a Premium workspace as a shared dataset. Users connect to it from their desktops and build reports using DirectQuery.
Some users need to bring their own (BYO) data and combine it with the data in the shared model. They create composite models that connect to the shared model via DirectQuery and relate it to their local tables.
However, they constantly run into the 1,000,000-row limit and get an error, even when the combination is done through measures (not calculated columns).
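For context, even a very simple measure can hit that limit. The sketch below uses made-up table and column names ('Sales', 'LocalBudget', 'Customer'[CustomerKey] are assumptions, not our actual model): when an imported BYO table is related to a DirectQuery table from the shared model, slicing the measure by a local attribute forces Power BI to query the remote model at the granularity of the relationship column, and if that intermediate result exceeds 1,000,000 rows the visual fails.

```dax
// Hypothetical composite model (names are illustrative):
//   'Sales'       = DirectQuery table from the shared Premium dataset
//   'LocalBudget' = imported BYO table, related to the remote
//                   'Customer'[CustomerKey] column
// Even this simple measure can fail: evaluating it by a local attribute
// requires the remote model to return Sales at CustomerKey granularity,
// which can exceed the 1,000,000-row DirectQuery result limit when the
// relationship column has millions of distinct values.
Budget Variance =
SUM ( 'Sales'[Amount] ) - SUM ( 'LocalBudget'[BudgetAmount] )
```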
I would rather not bring every little piece of data users might need into the big model, but at this point I don't see another way to solve it. This limitation also renders the whole composite-model approach almost useless for our scenario.
What is the best practice we could follow?
What we found is that it is absolutely critical to understand the cardinality of the relationship (link) fields on both sides of the composite model. This is what kills usability: anything above 10K unique values will result in unhappy users, unhappy capacity admins, or both.
In other words: try to drastically reduce the cardinality of those columns before linking the data models.
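As an illustration (again with hypothetical table and column names), one way to do this is to pre-aggregate the BYO data to a coarser grain before creating the relationship, so the link columns stay well under the 10K-unique-value mark, for example with a calculated table:

```dax
// Hypothetical example: summarize the imported BYO table to
// month/product grain before relating it to the shared model, so the
// link columns ('MonthKey', 'ProductKey') have a few thousand distinct
// values instead of millions of transaction-level IDs.
LocalBudgetSummary =
SUMMARIZECOLUMNS (
    'LocalBudget'[MonthKey],
    'LocalBudget'[ProductKey],
    "Budget Amount", SUM ( 'LocalBudget'[BudgetAmount] )
)
```

The same idea applies on the shared-model side: give users a low-cardinality key (date, month, product) to relate to, rather than a transaction-level ID.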